Search Results

  • RenderPartial a view from another controller (and in another folder)

    - by George
    Hey MVC experts. I have two database entities that I need to represent, and I need to output them on a single page. I have something like this:

        Views
            Def
                ViewA
                ViewB
            Test
                ViewC

    I want ViewC to display ViewA, which displays ViewB. Right now I'm using something like this:

        // View C
        <!-- bla -->
        <% Html.RenderPartial(Url.Content("../Definition/DefinitionDetails"), i); %>

        // View A
        <!-- bla -->
        <% Html.RenderPartial(Url.Content("../Definition/DefinitionEditActions")); %>

    Is there a better way to do this? I find that linking with relative pathnames can burn you. Any tips? Any chance I can make something like Html.RenderPartial("Definition", "DefinitionDetails", i); work? Thanks for the help
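
    One common fix (a sketch, not from the original question): Html.RenderPartial also accepts an application-relative virtual path with the file extension, which avoids fragile "../" hops between view folders. The paths below assume the partials live under Views/Definition/:

        <%-- View C: reference the partial by its app-relative path --%>
        <% Html.RenderPartial("~/Views/Definition/DefinitionDetails.ascx", i); %>

        <%-- View A --%>
        <% Html.RenderPartial("~/Views/Definition/DefinitionEditActions.ascx"); %>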


  • How can I get TFS 2010 to build each project to a separate directory?

    - by Jonathan Schuster
    In our project, we'd like to have our TFS build put each project into its own folder under the drop folder, instead of dropping all of the files into one flat structure. To illustrate, we'd like to see something like this:

        DropFolder/
            Foo/
                foo.exe
            Bar/
                bar.dll
            Baz/
                baz.dll

    This is basically the same question as was asked here, but now that we're using workflow-based builds, those solutions don't seem to work. The solution using the CustomizableOutDir property looked like it would work best for us, but I can't get that property to be recognized. I customized our workflow to pass it in to MSBuild as a command line argument (/p:CustomizableOutDir=true), but it seems MSBuild just ignores it and puts the output into the OutDir given by the workflow. I looked at the build logs, and I can see that the CustomizableOutDir and OutDir properties are both getting set in the command line args to MSBuild. I still need OutDir to be passed in so that I can copy my files to TeamBuildOutDir at the end. Any idea why my CustomizableOutDir parameter isn't getting recognized, or if there's a better way to achieve this?
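
    For reference, a sketch of the per-project redirection usually paired with CustomizableOutDir (an assumption about the setup, not a confirmed fix for the workflow issue): each project opts in by overriding OutDir only when building under Team Build, so local builds stay untouched.

        <!-- In each .csproj/.vbproj: route output to a per-project folder
             under the drop. TeamBuildOutDir is only defined during a team
             build; CustomizableOutDir=true stops the build from forcing a
             single flat OutDir on every project. -->
        <PropertyGroup Condition=" '$(TeamBuildOutDir)' != '' ">
          <OutDir>$(TeamBuildOutDir)\$(MSBuildProjectName)\</OutDir>
        </PropertyGroup>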


  • XNA Class Design with Structs

    - by Nate Bross
    I'm wondering how you'd recommend designing a class, given the fact that the XNA Framework uses structs all over the place? For example, a sprite class, which may require a Vector2 and a Rectangle (both defined as structs) to be accessed outside of the class. The issue comes in when you try to write code like this:

        class Item
        {
            public Vector2 Position { get; set; }
            public Item() { Position = new Vector2(5, 5); }
        }

        Item i = new Item();
        i.Position.X = 20; // fails with error 'Cannot modify the return value of Item because it is not a variable.'

        // you must write code like this
        var pos = i.Position;
        pos.X++;
        i.Position = pos;

    The second option compiles and works, but it is just butt ugly. Is there a better way?
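
    One workaround sketch (an illustration, not the only answer): expose the struct as a public field, or wrap the mutation in a method. A field access is a variable, so member assignment compiles; a property getter returns a copy, which is what the compiler is complaining about.

        // using Microsoft.Xna.Framework; (Vector2 lives there)
        class Item
        {
            // field, not property: i.Position.X = 20 now compiles,
            // because the access yields the struct itself, not a copy
            public Vector2 Position = new Vector2(5, 5);

            // alternative: keep the property and mutate through a method
            // public void Move(float dx, float dy)
            //     { Position = new Vector2(Position.X + dx, Position.Y + dy); }
        }

    The trade-off is losing the property's encapsulation, which is why the copy-modify-assign pattern remains the "safe" default.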


  • Would the MSFT Report Viewer (rdlc) Work with MVC

    - by Mike Thomas
    I am interested in learning MVC, and have experimented with a couple of the sample apps. As a project, I'd like to move part or all of my own office app to MVC. An important part of this app, and of ALL of my apps for customers, is the printing of invoices, purchase orders, inventory lists and so forth. In fact, one of their main criteria for judging what we do is the appearance of these documents and their incorporation into the app in a practical, intuitive way. It's impossible for me to get by without a report writer. The MSFT report viewer used to produce rdlc reports has been sufficient, and even comes out better than the competition in a couple of key areas. Does this control work with an ASP.NET MVC application?
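
    The viewer itself is a WebForms server control, but the underlying engine can still be used from MVC: a common pattern (sketched below under assumed report and dataset names) is to render the .rdlc to PDF server-side with LocalReport and return the bytes from a controller action.

        using Microsoft.Reporting.WebForms;
        using System.Web.Mvc;

        public class InvoiceController : Controller
        {
            public ActionResult Pdf(int id)
            {
                var report = new LocalReport
                {
                    ReportPath = Server.MapPath("~/Reports/Invoice.rdlc") // assumed path
                };
                report.DataSources.Add(
                    new ReportDataSource("InvoiceData", LoadInvoiceRows(id)));

                string mimeType, encoding, extension;
                string[] streams;
                Warning[] warnings;
                byte[] pdf = report.Render("PDF", null, out mimeType,
                    out encoding, out extension, out streams, out warnings);

                return File(pdf, mimeType, "invoice.pdf");
            }

            // hypothetical data accessor matching the .rdlc dataset
            private System.Collections.IEnumerable LoadInvoiceRows(int id)
            {
                return new object[0]; // query real rows here
            }
        }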


  • issues horizontal scrolling using jQuery.ScrollTo / jQuery.SerialScroll

    - by kapil.israni
    Hi, I am trying to develop auto horizontal scrolling for our website using jQuery.ScrollTo / jQuery.SerialScroll. I am not sure if this is the best jQuery library for that, but if there's something better, please let me know. Here's the behavior that I want: check out foursquare's "Recent Activity" list. The data that will refresh will come from an AJAX request that I make every few seconds using window.setInterval. I am not really a CSS/JavaScript guy, so I haven't been able to figure out jQuery.SerialScroll. Here's the website - look at the "Live Job Feeds" list. Currently the list does refresh with the data coming from the AJAX call, but I don't see the effect, the animation; in fact I don't even think serialScroll is being used. Right now I am doing a $('#feed-ticker').prepend(content) to prepend the newly arrived data. You can do a view source to look at the current code. Any help would be really appreciated. Thanks.
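
    For the visual effect alone, plain jQuery may be enough; a minimal ticker sketch (an illustration assuming the '#feed-ticker' container from the question, no serialScroll at all):

        function addFeedItem(content) {
            // hide the new item first, then slide it into view so the
            // refresh is visible as an animation rather than a jump
            $(content)
                .hide()
                .prependTo('#feed-ticker')
                .slideDown(400);

            // keep the list bounded by dropping items past the 20th
            $('#feed-ticker').children().slice(20).remove();
        }

    Calling addFeedItem(...) from the existing setInterval handler would replace the bare prepend() call.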


  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps up rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This can be seen in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs via:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs

        XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
        Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This is a new feature and contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, the log space reservation is calculated at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead of log objects during recovery; this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued from this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice in Linux nowadays because:

    - Stability: XFS has had a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files; "XFS does not work very well for small files" has been an old story for years now.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying OSD daemons on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB); Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection that XFS is best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has been proven fast and stable in various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang)

    This is a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support: separating metadata and data storage and reducing file access time. A file access needs two round trips - fetch the metadata from the MDS, then get the data from the OSD - so small-file access is limited by network latency. The solution is, for small files, to store the data with the metadata so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature, they have only run some small-scale testing but saw really noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even deployed everywhere at DreamHost (subject to confirmation). From Li, they have only deployed Ceph as small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)

    Because of high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    Swap-out:

    - swap_map scan. swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages in the use of SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern. In most cases, the swap I/O comes in an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved I/O to some extent, we cannot count on it completely. Hence a per-CPU cluster is added on top of the previous change; it helps each reclaimer do sequential I/O, which the block layer can merge more easily.
    - TLB flush. If we are reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During this process, we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only parts of the work have been pushed to mainline.

    Swap-in:

    - Page fault does iodepth=1 synchronous I/O, but it is a bit wasteful to issue only a page-sized I/O. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it does not perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise (MADV_WILLNEED) to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there).

    Swap discard:

    As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster.

    Misc: lock contention. With many concurrent swap-outs and swap-ins, lock contention on anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. There is still some lock contention on very high speed SSDs, though, because of the swap-cache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current status of memory compression. There are now three projects (zswap/zram/zcache), which all aim to smooth out swap I/O storms and improve performance, but each has its own pros and cons.

    - ZSWAP is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.
    - ZRAM acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since freeing just one page may require more pages to decompress into. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    - ZCACHE handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first is RAID5/6 cards that use full-stripe writes to improve performance. The second one he introduced is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is databases. Another thing I think is worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system: applications can access the NV-DRAM directly via a memory address, using the standard system call mmap(). He said that this is very useful for database logging now.
    These kinds of NVM products have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize such nonvolatile memory.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is support for ATS (atomic test and set), which can work with DLM at the same time. It looks like this idea was inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux grows in complexity. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things - simple mistakes, complex problems - but the challenges are that some are very slow, they produce false positives, and they may need a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        }
        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. E.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are moving toward automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace; these can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run them at all times during debugging.
    - Tools to read/understand source, e.g. grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). "Give us all do_foo instances":

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } = { .do_foo = my_foo };
        foo->do_foo(foo);

      It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as a type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible new namespaces for the future. The first is audit, though it is not clear whether it should be assigned to the user namespace or not. The other is syslog, but the question is: do we really need it?

    In-memory Compression (Bob Liu)

    Same as at CLSF, a nice introduction that I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu


  • Resolving MSB3247 - Found conflicts between different versions of the same dependent assembly

    - by David Gardiner
    A .NET 3.5 solution ended up with this warning when compiling with MSBuild. Sometimes NDepend might help out, but in this case it didn't give any further details. Like Bob, I ended up having to resort to opening each assembly in ILDASM until I found the one that was referencing an older version of the dependent assembly. I did try using MSBuild from VS 2010 Beta 2 (as the Connect article indicated this was fixed in the next version of the CLR), but that didn't provide any more detail either (maybe fixed post Beta 2). Is there a better (more automated) approach?
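
    One more automated route (a suggestion beyond the original post): rebuild with detailed MSBuild logging and search the ResolveAssemblyReference output for the conflict text, which names both the winning reference and the one that dragged in the old version.

        msbuild MySolution.sln /t:Rebuild /v:detailed > build.log
        rem then search build.log for "There was a conflict" to see which
        rem assembly reference pulled in the older dependent version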


  • PHP form script error

    - by Alex
    Hi, I have created a rather large HTML form and I would like the data to be sent to my email address. I am using the POST method and thought my PHP was up to snuff. However, I now get the following error upon submission:

        Parse error: syntax error, unexpected '}' in C:\www\mo\marinecforum\send_form_application.php on line 90.

    I am having a hell of a time with this. Beyond the error above, I wonder if there is a better way to approach this? Here is the PHP: http://pastebin.com/MKUcgihg Many thanks, Alex
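
    The unexpected '}' usually means an unbalanced brace or a missing semicolon just above the reported line. For the bigger question, a minimal POST-to-email handler looks roughly like this (a sketch with assumed field names, not the poster's actual script):

        <?php
        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            $name    = isset($_POST['name'])    ? trim($_POST['name'])    : '';
            $email   = isset($_POST['email'])   ? trim($_POST['email'])   : '';
            $message = isset($_POST['message']) ? trim($_POST['message']) : '';

            $body = "Name: $name\nEmail: $email\n\n$message";

            // mail() returns false on failure, so check it
            if (mail('you@example.com', 'New form submission', $body)) {
                echo 'Thanks, your form was sent.';
            } else {
                echo 'Sorry, the message could not be sent.';
            }
        }
        ?>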


  • Tree View / File View Control for C#

    - by Jason
    I have been looking for a C# tree control for displaying a file system that has the following capabilities:

        1. Select a starting directory. I don't always want to start at a "default" top directory level.
        2. The ability to grab an event when the user double clicks on a file in the tree. I want to handle opening the file within my application.

    I have been looking at this C# File Browser. Unfortunately, I have not been able to figure out how to meet my second need. (If anybody can clear that up for me, I would like that even better.) Thanks for any help.
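
    With the stock WinForms TreeView, the second need can be covered by hand; a sketch (it assumes file nodes store their full path in Tag when the tree is populated from the chosen root directory):

        // subscribe once, e.g. in the form's constructor
        treeView1.NodeMouseDoubleClick += OnNodeDoubleClick;

        private void OnNodeDoubleClick(object sender, TreeNodeMouseClickEventArgs e)
        {
            var path = e.Node.Tag as string;
            if (path != null && System.IO.File.Exists(path))
            {
                OpenInMyApp(path); // hypothetical in-app open handler
            }
        }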


  • Building a jQuery Plug-in to make an HTML Table scrollable

    - by Rick Strahl
    Today I got a call from a customer and we were looking over an older application that uses a lot of tables to display financial and other assorted data. The application is mostly meta-data driven, with lots of layout formatting driven automatically through meta data rather than through explicit hand-coded HTML layouts. One of the problems in this app is tables that display a non-fixed amount of data. The users of this app don't want to use paging to see more data, but instead want to display overflow data using a scrollbar. Many of the forms are very densely populated, often with multiple data tables that display at most a few rows of data in the UI. This sort of layout does not lend itself well to paging, but works much better with scrollable data. Unfortunately scrollable tables are not easily created. HTML tables are mangy beasts, as anybody who's done any sort of Web development knows. Tables are finicky when it comes to styling and layout, and they have many funky quirks, especially when it comes to scrolling, both of the table rows themselves or even the child columns. There's no built-in way to make tables scroll and to lock headers while you do, and while you can embed a table (or anything really) into a scrolling div with something like this:

        <div style="position:relative; overflow: hidden; overflow-y: scroll; height: 200px; width: 400px;">
            <table id="table" style="width: 100%" class="blackborder">
                <thead>
                    <tr class="gridheader">
                        <th>Column 1</th>
                        <th>Column 2</th>
                        <th>Column 3</th>
                        <th>Column 4</th>
                    </tr>
                </thead>
                <tbody>
                    <tr>
                        <td>Column 1 Content</td>
                        <td>Column 2 Content</td>
                        <td>Column 3 Content</td>
                        <td>Column 4 Content</td>
                    </tr>
                    <tr>
                        <td>Column 1 Content</td>
                        <td>Column 2 Content</td>
                        <td>Column 3 Content</td>
                        <td>Column 4 Content</td>
                    </tr>
                    …
                </tbody>
            </table>
        </div>

    that won't give a very satisfying visual experience: both the header and body scroll, which looks odd. You lose context as soon as the header scrolls off the top, and when you reach the bottom of the list the bottom outline of the table shows, which also looks off. The scrollbar runs all the way down the length of the table - yet another visual miscue. In a pinch this will work, but it's ugly.

    What's out there? Before we go further here you should know that there are a few capable grid plug-ins out there already. Among them:

        - Flexigrid (can work off any table as well as with AJAX data)
        - jQuery Scrollable Table Plug-in (features similar to what I need, but not quite)
        - jqGrid (mostly an AJAX grid, which is very powerful and works very well)

    But in the end none of them fit the bill of what I needed in this situation. All of these require custom CSS and some of them are fairly complex to restyle. Others are AJAX-only or work better with AJAX-loaded data. However, I need to actually try (as much as possible) to maintain the original styling of the tables without requiring extensive re-styling.

    Building the makeTableScrollable() Plug-in

    To make a table scrollable requires rearranging the table a bit. In the plug-in I built I create two <div> tags and split the table into two: one for the table header and one for the table body. The bottom <div> tag then contains only the table's row data and can be scrolled while the header stays fixed. Using jQuery the basic idea is pretty simple: you create the divs, copy the original table into the bottom one, then clone the table, clear all content, append the <thead> section into the new table, and then copy that table into the header <div>. Easy as pie, right?
    Unfortunately it's a bit more complicated than that, as it's tricky to get the width of the table right to account for the scrollbar (by adding a small column) and to make sure the borders line up properly for the two tables. A lot of style settings have to be made to ensure the table is a fixed size, to remove and reattach borders, to add extra space to allow for the scrollbar and so forth. The end result of my plug-in is a table with a scrollbar: using the same table I used earlier, you end up with a fixed header over a scrolling body. To create it, I use the following jQuery plug-in logic to select my table and run the makeTableScrollable() plug-in against the selector:

        $("#table").makeTableScrollable({ cssClass: "blackborder" });

    Without much further ado, here's the short code for the plug-in:

        (function ($) {
            $.fn.makeTableScrollable = function (options) {
                return this.each(function () {
                    var $table = $(this);
                    var opt = {
                        // height of the table
                        height: "250px",
                        // right padding added to support the scrollbar
                        rightPadding: "10px",
                        // css class used for the wrapper div
                        cssClass: ""
                    };
                    $.extend(opt, options);

                    var $thead = $table.find("thead");
                    var $ths = $thead.find("th");
                    var id = $table.attr("id");
                    var cssClass = $table.attr("class");
                    if (!id)
                        id = "_table_" + new Date().getMilliseconds().toString();

                    $table.width("+=" + opt.rightPadding);
                    $table.css("border-width", 0);

                    // add a column to all rows of the table
                    var first = true;
                    $table.find("tr").each(function () {
                        var row = $(this);
                        if (first) {
                            row.append($("<th>").width(opt.rightPadding));
                            first = false;
                        }
                        else
                            row.append($("<td>").width(opt.rightPadding));
                    });

                    // force full sizing on each of the th elements
                    $ths.each(function () {
                        var $th = $(this);
                        $th.css("width", $th.width());
                    });

                    // create the table wrapper div
                    var $tblDiv = $("<div>").css({ position: "relative", overflow: "hidden", overflowY: "scroll" })
                                            .addClass(opt.cssClass);
                    var width = $table.width();
                    $tblDiv.width(width).height(opt.height)
                           .attr("id", id + "_wrapper")
                           .css("border-top", "none");

                    // insert the wrapper before the table,
                    // then move the table into it
                    $tblDiv.insertBefore($table);
                    $table.appendTo($tblDiv);

                    // clone the div for the header
                    var $hdDiv = $tblDiv.clone();
                    $hdDiv.empty();
                    width = $table.width();
                    $hdDiv.attr("style", "")
                          .css("border-bottom", "none")
                          .width(width)
                          .attr("id", id + "_wrapper_header");

                    // create a copy of the table and remove all children
                    var $newTable = $($table).clone();
                    $newTable.empty()
                             .attr("id", $table.attr("id") + "_header");
                    $thead.appendTo($newTable);
                    $hdDiv.insertBefore($tblDiv);
                    $newTable.appendTo($hdDiv);
                    $table.css("border-width", 0);
                });
            };
        })(jQuery);

    Oh sweet spaghetti code :-) The code starts out by dealing with the parameters that can be passed in the options object map:

        height - The height of the full table/structure, i.e. the outside wrapper container. Defaults to 250px.
        rightPadding - The padding that is added to the right of the table to account for the scrollbar. Creates a column of this width and injects it into the table. If too small, the rightmost column might get truncated; if too large, the empty column might show.
        cssClass - The CSS class of the wrapping container that appears to wrap the table. If you want a border around your table, this class should probably provide it, since the plug-in removes the table border.

    The rest of the code is obtuse, but pretty straightforward. It starts by creating a new column in the table to accommodate the width of the scrollbar and avoid clipping of text in the rightmost column.
    The width of the columns is explicitly set in the header elements to force the size of the table to be fixed and to provide the same sizing when the THEAD section is moved to a new copied table later. The table wrapper div is created, formatted, and the table is moved into it. The new wrapper div is cloned for the header wrapper and configured. Finally the actual table is cloned and cleared of all elements. The original table's THEAD section is then moved into the new table. At last the new table is added to the header <div>, and the header <div> is inserted before the table wrapper <div>. I'm always amazed how easy jQuery makes it to do this sort of re-arranging, and given what's happening, the amount of code is rather small.

    Disclaimer: your mileage may vary

    A word of warning: I make no guarantees about the code above. It's a first cut, and I provided this here mainly to demonstrate the concepts of decomposing and reassembling an HTML layout :-) which jQuery makes so nice and easy. I tested this component against the typical scenarios we plan on using it for, which are tables that use a few well-known styles (or no styling at all). I suspect that if you have complex styling on your <table> tag, things might not go so well. If you plan on using this plug-in, you might want to minimize your styling of the table tag and defer any border formatting to the class passed in via the cssClass parameter, which ends up on the two wrapper div's that wrap the header and body rows. There's also no explicit support for footers. I rarely if ever use footers (when not using paging, that is), so I didn't feel the need to add footer support. However, if you need that, it's not difficult to add - the logic is the same as adding the header. The plug-in relies on a well-formatted table that has THEAD and TBODY sections along with TH tags in the header. Note that ASP.NET WebForm DataGrids and GridViews by default do not generate well-formatted table HTML. You can look at my Adding proper THEAD sections to a GridView post for more info on how to get a GridView to render properly. The plug-in has no dependencies other than jQuery. Even with these limitations in mind, I hope this might be useful to some of you. I know I've already identified a number of places in my own existing applications where I will be plugging this in almost immediately.

    Resources

        - Download Sample and Plug-in code
        - Latest version in the West Wind Web & AJAX Toolkit Repository

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in jQuery, HTML, ASP.NET


  • Best way to detect integer overflow in C/C++

    - by Chris Johnson
    I was writing a program in C++ to find all solutions of a^b = c (a to the power of b), where a, b and c together use all the digits 0-9 exactly once. The program looped over values of a and b, and ran a digit-counting routine each time on a, b and a^b to check if the digits condition was satisfied. However, spurious solutions can be generated when a^b overflows the integer limit. I ended up checking for this using code like:

        unsigned long b, c, c_test;
        ...
        c_test = c * b;          // possible overflow
        if (c_test / b != c) {
            /* there has been an overflow */
        } else {
            c = c_test;          // no overflow
        }

    Is there a better way of testing for overflow? I know that some chips have an internal flag that is set when overflow occurs, but I've never seen it accessed through C or C++.
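
    A common alternative (a sketch, not from the original question) is to test against the type's limit before multiplying, so the overflow never happens in the first place; newer GCC and Clang also offer __builtin_mul_overflow for the same purpose.

        #include <limits.h>

        unsigned long b, c;
        /* ... */
        if (b != 0 && c > ULONG_MAX / b) {
            /* c * b would overflow */
        } else {
            c = c * b;  /* safe: the product fits in unsigned long */
        }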


  • Is there a way to mark up code to tell ReSharper not to format it?

    - by adrianbanks
    I quite often use the ReSharper "Clean Up Code" command to format my code to our coding style before checking it into source control. This works well in general, but some bits of code are better formatted manually (e.g. because of the indenting rules in ReSharper, things like chained LINQ methods or multi-line ternary operators get a strange indent that pushes them way off to the right). Is there any way to mark up parts of a file to tell ReSharper not to format that area? I'm hoping for some kind of markup similar to how ReSharper suppresses other warnings/features. If not, is there some way of changing a combination of settings to get ReSharper to format the indenting correctly?

    EDIT: I have found this post from the ReSharper forums that says that generated code sections (as defined in the ReSharper options page) are ignored in code cleanup. Having tried it though, the section doesn't seem to get ignored.


  • How can I resolve exception: Error executing child request for ChartImg.axd

    - by macleojw
    I'm using the Chart Controls for the Microsoft .NET Framework. Most of the time they work perfectly. However, if I leave a page for longer than 20-30 mins and then try to refresh the page, I get an error saying:

        Error executing child request for ChartImg.axd.
        Exception Details: System.Web.HttpException: Error executing child request for ChartImg.axd.

    If I update the page using an AJAX update panel, I get the following error:

        Sys.WebForms.PageRequestManagerServerErrorException: Error executing child request for ChartImg.axd

    It seems that the chart handler stops working after a period of inactivity. Most of the webpages I've looked at for this error are for situations where the error is displayed all the time; in my case it is only displayed after a period of inactivity. Can someone provide a better explanation of what is happening and suggest a solution?
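
    The symptom matches the chart image handler discarding its cached images after an idle timeout, so the refresh asks ChartImg.axd for an image that no longer exists. A hedged sketch of the usual web.config adjustment (the storage mode, directory and timeout values are assumptions to adapt):

        <appSettings>
          <!-- store rendered chart images as files instead of per-session
               memory, and keep them around longer -->
          <add key="ChartImageHandler"
               value="storage=file;timeout=200;dir=c:\TempImageFiles\;deleteAfterServicing=false;" />
        </appSettings>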


  • java.sql.SQLException: SQL logic error or missing database

    - by Sunil Kumar Sahoo
    Hi All, I have created a database connection with SQLite using JDBC in Java. My SQL statements execute properly, but sometimes I get the following error when I call conn.commit():

        java.sql.SQLException: SQL logic error or missing database

    Can anyone please help me with how to avoid this type of problem? Can anyone give me a better approach to calling JDBC programs?

        Class.forName("org.sqlite.JDBC");
        conn = DriverManager.getConnection("jdbc:sqlite:/home/Data/database.db3");
        conn.setAutoCommit(false);
        String query = "Update Chits set BlockedForChit = 0 where ServerChitID = '"
                + serverChitId + "' AND ChitGatewayID = '" + chitGatewayId + "'";
        Statement stmt = null;
        try {
            stmt = conn.createStatement(); // was missing: stmt was used while still null
            stmt.execute(query);
            conn.commit();
            stmt.close();
            stmt = null;
        }

    Thanks, Sunil Kumar Sahoo
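
    A sketch of a sturdier pattern (table and column names are taken from the question): use PreparedStatement instead of string concatenation, and close resources in finally so handles aren't leaked across commits.

        PreparedStatement ps = null;
        try {
            ps = conn.prepareStatement(
                    "UPDATE Chits SET BlockedForChit = 0 "
                  + "WHERE ServerChitID = ? AND ChitGatewayID = ?");
            ps.setString(1, serverChitId);
            ps.setString(2, chitGatewayId);
            ps.executeUpdate();
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();   // leave the database in a known state
            throw e;
        } finally {
            if (ps != null) ps.close();
        }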


  • Render SSRS .RDL to PDF and just open it.

    - by FelipeFiali
    I tried to be as descriptive as I could. OK, so since I've never done anything like this before, I guess it's better to ask before starting at all so I don't get the whole process wrong. We have a report that has to be called from an ASP.NET application; it could be a simple button. Through the code-behind, I've got to pass in a parameter, render the report, and not save it anywhere - just open it to the user as a .pdf file so he can save or print it. So please, give me links or tips; I'm reading the MSDN documentation right now... If you have any suggestions on other technologies for the report, I'd like to hear them, but it has to be called from ASP.NET. Thanks in advance!
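
    One low-ceremony route (a sketch using SSRS URL access; the server URL, report path and parameter name are assumptions): request the PDF rendering from the report server in the button's click handler and stream it back without saving anything.

        protected void RunReport_Click(object sender, EventArgs e)
        {
            string url = "http://reportserver/ReportServer?/Sales/Invoice"
                       + "&rs:Command=Render&rs:Format=PDF"
                       + "&InvoiceId=" + invoiceId;   // hypothetical parameter

            using (var client = new System.Net.WebClient())
            {
                client.UseDefaultCredentials = true;
                byte[] pdf = client.DownloadData(url);

                Response.ContentType = "application/pdf";
                // "inline" opens in the browser; "attachment" forces a save dialog
                Response.AddHeader("Content-Disposition", "inline; filename=report.pdf");
                Response.BinaryWrite(pdf);
                Response.End();
            }
        }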


  • Scipy interpolation on a numpy array

    - by dassouki
    I have a lookup table that is defined the following way:

        TR_ua1 = np.array([ [3.6, 6.5,  9.1, 11.5, 13.8],
                            [3.9, 7.3, 10.0, 13.1, 15.9],
                            [4.5, 9.2, 12.2, 14.8, 18.2] ])

    The header row elements (hh) are <1, 2, 3, 4, 5+. The header column elements (inc) are <10000, 20000, 20001+. The user will input a value, e.g. (1.3, 25000) or (0.2, 50000), and scipy.interpolate should interpolate to determine the correct value. Currently, the only way I can do this is with a bunch of if/elifs, as exemplified below. I'm pretty sure there is a better, more efficient way of doing this. Here's what I've got so far:

        import numpy as np
        from scipy import interpolate

        if ua == 1:
            if inc <= low_inc:  # low_inc = 10,000
                if hh <= 1:
                    return TR_ua1[0][0]
                elif 1 <= hh < 2:
                    return interpolate((1, 2), (TR_ua1[0][1], TR_ua1[0][2]))
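
    A sketch of replacing the if/elif ladder with bilinear interpolation over the whole table (the numeric anchors chosen for the open-ended "<10000 / 20000 / 20001+" and "<1 ... 5+" bins are assumptions you'd tune):

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        TR_ua1 = np.array([[3.6, 6.5,  9.1, 11.5, 13.8],
                           [3.9, 7.3, 10.0, 13.1, 15.9],
                           [4.5, 9.2, 12.2, 14.8, 18.2]])

        inc_grid = np.array([10000.0, 20000.0, 30000.0])   # assumed bin anchors
        hh_grid  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

        # kx=ky=1 makes the spline piecewise-linear in both directions
        lut = RectBivariateSpline(inc_grid, hh_grid, TR_ua1, kx=1, ky=1)

        def lookup(hh, inc):
            # clamp to the table edges so 0.2 or 50000 hit the boundary rows
            hh  = min(max(hh,  hh_grid[0]),  hh_grid[-1])
            inc = min(max(inc, inc_grid[0]), inc_grid[-1])
            return lut(inc, hh)[0, 0]

        print(lookup(1.3, 25000))   # interpolated between the 2nd and 3rd rows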


  • Comparison of the multiprocessing module and pyro?

    - by fivebells
    I use Pyro for basic management of parallel jobs on a compute cluster. I just moved to a cluster where I will be responsible for using all the cores on each compute node. (On previous clusters, each core has been a separate node.) The Python multiprocessing module seems like a good fit for this. I notice it can also be used for remote-process communication. If anyone has used both frameworks for remote-process communication, I'd be grateful to hear how they stack up against each other. The obvious benefit of the multiprocessing module is that it's built in from 2.6. Apart from that, it's hard for me to tell which is better.
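
    For concreteness, the multiprocessing side of that comparison is its managers API; a minimal server sketch (address and authkey are placeholders) that exposes a shared job queue to remote clients:

        from multiprocessing.managers import BaseManager
        import queue

        job_queue = queue.Queue()

        class JobManager(BaseManager):
            pass

        # remote clients call get_jobs() to obtain a proxy to the queue
        JobManager.register('get_jobs', callable=lambda: job_queue)

        if __name__ == '__main__':
            mgr = JobManager(address=('', 50000), authkey=b'cluster-secret')
            server = mgr.get_server()
            server.serve_forever()

    A client registers 'get_jobs' the same way, calls connect() on a manager pointed at the host, and puts/gets work items through the proxy; Pyro's equivalent is exposing an object through its daemon and name server.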


  • How to create a new LaTeX command that behaves something like \verb?

    - by NawaMan
    I have been using LaTeX for some time now, but have never actually gotten my hands dirty declaring a new command, as I try to avoid that. However, I often need to add monospace text in my document, and I use \verb for it, which is fine, except that the verb font size is bigger than the normal text font. So I need to change the font size and undo it afterwards, like \small{}\verb#My monospace code#\normalsize{}. This is not very convenient and is mistake-prone. Is there a better way to do this? Can I define a new command for this? How?
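
    A sketch of one common approach (the command name \mono is made up): \verb cannot be used inside another command's argument, so for text without LaTeX special characters a \texttt wrapper at \small size does the job; for true verbatim content, packages such as fancyvrb or listings are the safer route.

        % in the preamble
        \newcommand{\mono}[1]{{\small\texttt{#1}}}

        % usage in the body
        This is normal text with \mono{My monospace code} inline.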


  • Client-Side script to upload attachments to the Sharepoint 2007 list

    - by Clone of Anton Makrushin
    Hello. I have no real script-writing experience. I have a list created on MOSS 2007 with about 1000 elements and attachments enabled. I need to attach a file (*.jpg) from a local folder to each list item. I don't have administrator privileges on the MOSS server, only contributor rights. Here is my script:

        $web = New-Object System.Net.WebClient
        $web.Credentials = [System.Net.CredentialCache]::DefaultCredentials
        $web.Headers.Add("user-agent", "PowerShell Script")
        $web.UploadFile('http://ruglbsrvsps/IT/Lists/Test1/', 'C:\temp\Attachments\14\Img1.jpg')

    Test1 is the target list; Item1, Item2, Item3 are list items without attachments, created manually. When I run the script, it returns a byte array and does not upload the file to the list item. Can you fix my script or advise a better solution for my task (attaching a batch of files to MOSS list items, with only contributor rights on the target SharePoint 2007 list)? Thank you.
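
    WebClient.UploadFile just HTTP-POSTs a file to a URL, which is not how list attachments are stored. A sketch of the contributor-friendly alternative via the site's Lists.asmx web service (the site URL, list name and item ID are taken from the question; adjust as needed):

        # proxy for the site's Lists web service, using your own credentials
        $lists = New-WebServiceProxy `
            -Uri 'http://ruglbsrvsps/IT/_vti_bin/Lists.asmx?WSDL' `
            -UseDefaultCredential

        $bytes = [System.IO.File]::ReadAllBytes('C:\temp\Attachments\14\Img1.jpg')

        # AddAttachment(listName, listItemID, fileName, attachment)
        $lists.AddAttachment('Test1', '1', 'Img1.jpg', $bytes)

    Looping that over the 1000 item IDs and their matching local files would cover the whole list.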


  • How do I base a style on a Silverlight toolkit theme style

    - by Ian Oakes
    I've been trying to add a theme from the Silverlight Toolkit to a project. In the project there are a number of existing styles used in the layout. The problem is that when any control has an explicit style applied to it, it does not receive any attributes of the style from the theme. In WPF I would use something like BasedOn={x:Type TextBox}, but this is not supported in Silverlight. I've considered going through the theme, setting a key for every style, and then using BasedOn to create both an implicit style to use with the ImplicitStyleManager and another explicit style for use with the existing styled controls. Have you got any better ideas?


  • Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly

    - by Carl Hörberg
    We develop mostly low-traffic but highly specialized web applications. Normally we use L2S, EF or nHibernate as the access layer and then throw ASP.NET MVC at it; for normal CRUD operations we query the ISession/DataContext directly, but more advanced functions/side effects go into some kind of service layer. Now, I was thinking about publishing the data through OData (WCF Data Services), querying that from the controllers (or even from jQuery when a good template engine shows up), and publishing the service operations through a WCF service (or as custom methods on the WCF Data Service?). What advantages/disadvantages does this architecture pose? Do I gain anything except higher complexity and latency? Better separation of concerns (or is that just an illusion)?


  • Logging IP address for uniqueness without storing the IP address itself for privacy

    - by szabgab
    In a web application, when logging some data I'd like to make sure I can identify data that came at different times but from the same IP address. On the other hand, for privacy reasons, as the data will be released publicly, I'd like to make sure the actual IP cannot be retrieved. So I need some one-way mapping of the IP addresses to other strings that ensures a 1-1 mapping. If I understand correctly, MD5, SHA1 or SHA256 could be a solution. I wonder if they are too expensive in terms of the processing needed? I'd be interested in any solution, though if there is an implementation in Perl, that would be even better.
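
    One caveat worth a sketch: the IPv4 space is small enough that a plain hash can be reversed by brute force, so a keyed hash with a secret kept out of the published data is the usual fix. A Perl sketch using Digest::SHA (where the key comes from is an assumption):

        use Digest::SHA qw(hmac_sha256_hex);

        my $secret = $ENV{IP_HASH_KEY};   # secret key, never published

        sub anonymize_ip {
            my ($ip) = @_;
            # same IP always yields the same token; without $secret an
            # attacker cannot simply hash all 2^32 IPv4 addresses to reverse it
            return hmac_sha256_hex($ip, $secret);
        }

        print anonymize_ip('192.0.2.17'), "\n";

    The per-request cost of one SHA-256 is negligible next to the web request itself.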


  • How to calculate line count for project

    - by sixtyfootersdude
    I am taking a software engineering class right now. Our assignment is to evaluate Mozilla's Thunderbird, in particular its size. One metric that we need to use is the number of lines of code in the project (lines of code meaning not counting comments or blank lines). Is there a standard way to find the number of lines, or am I better off just cracking out a script to do this? I think that I could do something like this:

        # remove all comments
        find -name *.java | \
        sed "/\/*/,\*\// s/.*//g | \    # remove multiline comments
        sed s/\/\///g                   # remove single line comments

        # count non-empty lines
        find -name *.java | grep -c "<character>"

    But I would need to do that for each file type. It seems like there should be some utility that does that already. (Something mac/unix-compatible would be preferable.)
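
    Two existing utilities fit (a suggestion beyond the original post): cloc and sloccount both count non-blank, non-comment lines per language across many file types, and both run on Mac/Unix.

        # per-language breakdown of blank, comment and code lines
        cloc path/to/thunderbird-source

        # alternative with a similar per-language report
        sloccount path/to/thunderbird-source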


  • How should I pass an object wrapping an API to a class using that API?

    - by Billy ONeal
    Hello everyone :) This is a revised/better-written version of the question I asked earlier today - that question is deleted now. I have a project where I'm getting started with Google Mock. I have created a class, and that class calls functions within the Windows API. I've also created a wrapper class with virtual functions wrapping the Windows API, as described in the Google Mock CheatSheet. I'm confused, however, about how I should pass the wrapper into the class that uses it. Obviously that object needs to be polymorphic, so I can't pass it by value, forcing me to pass a pointer. That in and of itself is not a problem, but I'm confused as to who should own the pointer to the class wrapping the API. So... how should I pass the wrapper class into the real class to facilitate mocking?
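
    A sketch of the usual answer, constructor injection with a non-owning reference (all names below are illustrative): the caller owns the concrete wrapper, so production code and the test each control the lifetime of what they pass in.

        #include <string>

        class WinApi {                                  // abstract wrapper
        public:
            virtual ~WinApi() {}
            virtual bool RemoveFile(const std::wstring& path) = 0;
        };

        class FileCleaner {                             // class under test
        public:
            explicit FileCleaner(WinApi& api) : api_(api) {}  // non-owning
            bool Clean(const std::wstring& path) { return api_.RemoveFile(path); }
        private:
            WinApi& api_;
        };

        // production: RealWinApi real; FileCleaner c(real);  - main() owns real
        // in tests:   MockWinApi mock; FileCleaner c(mock);  - the test owns mock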


  • Annotating axis in ggplot2

    - by mpiktas
    I am looking for a way to annotate an axis in ggplot2. An example of the problem can be found here: http://learnr.wordpress.com/2009/09/24/ggplot2-back-to-back-bar-charts. The y axis of the chart (example graph in the link) has an annotation: (million euro). Is there a way to create these types of annotations in ggplot2? Looking at the documentation there is no obvious way, since ggplot does not explicitly let you put objects outside the plotting area. But maybe there is a workaround? One possible workaround I thought about is using scales:

        data = data.frame(x = 1:10, y = 1:10)
        qplot(x = x, y = y, data = data) +
            scale_y_continuous(breaks = 10.1, label = "Millions")

    But then how do I remove the tick? And it seems that since ggplot does not support multiple scales, I would need to grab the output of scale_y_continuous when it calculates the scales automatically and then add my custom break and label by hand. Maybe there is a better way?
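
    A workaround sketch that avoids touching the scales at all: since ggplot2 renders through grid, the annotation can be drawn directly onto the device after printing the plot, even outside the panel (the coordinates are assumptions to tweak per plot):

        library(ggplot2)
        library(grid)

        data <- data.frame(x = 1:10, y = 1:10)
        print(qplot(x = x, y = y, data = data))

        # place a label near the top-left of the device, above the y axis
        grid.text("(million euro)",
                  x = unit(0.01, "npc"), y = unit(0.99, "npc"),
                  just = c("left", "top"), gp = gpar(fontsize = 8))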

