Search Results

Search found 1555 results on 63 pages for 'filtering'.

Page 21 of 63

  • Offlineimap -- push changes to all folders; only pull from INBOX folder

    - by g33kz0r
    I would like to be able to set up offlineimap to do the following:
      - Sync Remote/INBOX -> Local
      - Sync Local/Maildirs/* -> Remote
    Possible? The use case here is:
      1. I download all new mail from my remote IMAP INBOX folder with offlineimap.
      2. offlineimap's posthook command calls a custom python script which does junk filtering, then sorts and categorizes my mail in the local INBOX folder to various local maildirs based on sender, etc.
      3. I read my mail with mutt and perhaps do some more categorization.
      4. ?
    Step 4 is what I'm after. I want offlineimap to push my local changes (categorization, filtering, deletion in the case of spam) back to the various folders on the IMAP server, but as you can see, there's no need for me to be pulling any changes from folders other than Remote/INBOX, as no changes happen on the IMAP server itself. I hope that's a clear explanation of the problem.
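    For the pull side, offlineimap's folderfilter option (a Python expression in the config file) can restrict which remote folders take part in the sync at all. A minimal sketch of an ~/.offlineimaprc, with all account, host and user names as placeholders; note that offlineimap syncs each included folder bidirectionally, and folderfilter excludes folders entirely rather than making them push-only, which is exactly why step 4 is awkward:

        [general]
        accounts = Personal

        [Account Personal]
        localrepository = Local
        remoterepository = Remote

        [Repository Local]
        type = Maildir
        localfolders = ~/Maildir

        [Repository Remote]
        type = IMAP
        remotehost = imap.example.com
        remoteuser = user
        # Only the remote INBOX takes part in the sync; all other remote
        # folders are skipped in both directions.
        folderfilter = lambda folder: folder == 'INBOX'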

    Read the article

  • How to move mail among Google Apps for Domains users

    - by Paul Roub
    Considering moving the domain used by my extended family for email to Google Apps. One less server for me to manage, better spam filtering, etc. One thing that's been nice about running my own has been the way I manage my kids' incoming email - it comes to me first, and I drop good mail in a symlinked IMAP folder that we share. A little procmail is all it takes, and straight-through exceptions are easy to implement. (FYI, no I'm not advocating censorship, but manually filtering spam and viruses from my 8-year-old's inbox seems like the right thing to do. YMMV) Anyway. I'm wondering if there's an easy way to do something similar in Google Apps - setting up filters to auto-redirect to me looks easy enough (any gotchas there?), but moving things back is not obvious. Yes, I could access both accounts via IMAP and drag mails across, but does anyone have an easier way?
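    For the "moving things back" direction, one option is a small script over IMAP rather than manual drag and drop. A rough Python sketch using the standard imaplib module (credentials, host and folder names are placeholders; assumes IMAP access is enabled on both Google Apps accounts):

        import imaplib

        SRC_HOST, SRC_USER, SRC_PASS = "imap.gmail.com", "parent@example.com", "password-1"
        DST_HOST, DST_USER, DST_PASS = "imap.gmail.com", "child@example.com", "password-2"

        src = imaplib.IMAP4_SSL(SRC_HOST)
        src.login(SRC_USER, SRC_PASS)
        src.select("Screened")                # folder holding the approved mail

        dst = imaplib.IMAP4_SSL(DST_HOST)
        dst.login(DST_USER, DST_PASS)

        typ, data = src.search(None, "ALL")   # sequence numbers of all messages
        for num in data[0].split():
            typ, msg_data = src.fetch(num, "(RFC822)")
            raw = msg_data[0][1]
            # APPEND stores the raw message in the destination mailbox unchanged.
            dst.append("INBOX", None, None, raw)

        src.logout()
        dst.logout()

    Run on a schedule (cron, say), this approximates the "drop good mail into the kid's inbox" workflow without a shared symlinked folder.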

    Read the article

  • Restrict only some plugins to specific sites in Google Chrome

    - by Christian
    I am looking for a way to set up Google Chrome so that it will run a certain plug-in (Java, what else?) only on whitelisted sites, but other plug-ins (like the PDF viewer) everywhere. From playing with the policies available for Chrome, I think there are basically two levels of plug-in management:

    1. List of disabled plugins / List of enabled plugins: controls whether a plug-in exists for the browser at all. This pair of policies applies to plug-ins, but not to sites.
    2. Default plug-in settings / Allow plug-ins on sites: controls on which sites plug-ins can run. This set of policies applies to sites, but not to individual plug-ins, and it cannot override the first pair.

    There appears to be no way to configure Chrome so that some plug-ins only run on whitelisted sites, but others run everywhere by default. I have also looked at filtering content on the firewall/proxy level, but I'm not convinced it can be done securely there. Filtering by URLs (file names) or content types can be circumvented trivially, and identification by content inspection cannot be safe either.

    Read the article

  • What is the best appliance you've used?

    - by phuzion
    Post your favorite appliances or "all-in-one" programs. Whether it runs in a virtual machine or on its own hardware, it all goes. My submission is Untangle. It's an open source network gateway (their term). Essentially, it can run a plethora of things that you may otherwise end up buying another appliance for:

    - Web filtering
    - Logging
    - Mail spam filtering
    - Phishing monitoring
    - Spyware blocking
    - VPN

    You name it, it's all there. Best of all, it's mostly free. A few appliances have annual costs due to inherent licensing or subscription costs. If you are looking for a new network perimeter device, definitely check it out. The underlying OS doesn't matter, because it's the application we want to praise, not the OS beneath it.

    Read the article

  • Having a problem with texturing vertices in WebGL, think parameters are off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program. I have the parameters set as follows:

        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);

    The image (inlined in the original post) reports 32-bit depth in its properties, so that should take care of the gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB to see if it's not reading the transparency. It is a 32x32 pixel image, so it's a power of 2. And I've tried almost all the combinations of formats and types, but I'm not sure if this is the answer or not. I'm getting these two errors in the Chrome console:

        INVALID_VALUE: texImage2D: invalid image (index):101
        WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled.

    The drawArrays function is simply "gl.drawArrays(gl.TRIANGLES, 0, 6);", using 6 vertices to make a rectangle.

    Read the article

  • Ubuntu 12.04 taking too much time to boot

    - by adarshdinesh
    Ubuntu 12.04 is taking a long time to boot. Here is the system kernel message log from boot; it shows that anacron was killed. Why, and how can I fix the problem?

        [ 2.241047] scsi6 : usb-storage 2-1.6:1.0
        [ 2.241501] usbcore: registered new interface driver usb-storage
        [ 2.241895] USB Mass Storage support registered.
        [ 3.240670] scsi 6:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
        [ 3.241791] sd 6:0:0:0: Attached scsi generic sg2 type 0
        [ 3.243083] sd 6:0:0:0: [sdb] Attached SCSI removable disk
        [ 12.568641] Adding 4037904k swap on /dev/sda3. Priority:-1 extents:1 across:4037904k
        [ 12.615014] udevd[462]: starting version 175
        [ 12.651334] mei: module is from the staging directory, the quality is unknown, you have been warned.
        [ 12.655283] [drm] Initialized drm 1.1.0 20060810
        ...................
        [ 14.118369] init: alsa-restore main process (982) terminated with status 19
        [ 14.252595] init: anacron main process (1033) killed by TERM signal
        [ 14.285763] HDMI status: Codec=3 Pin=5 Presence_Detect=0 ELD_Valid=0
        [ 14.285841] input: HDA Intel PCH HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8
        [ 14.285925] input: HDA Intel PCH Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9
        [ 14.285991] input: HDA Intel PCH Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10
        [ 14.615073] init: plymouth-stop pre-start process (1222) terminated with status 1
        [ 16.447287] wlan0: authenticate with c0:8a:de:7c:60:e8 (try 1)
        [ 16.448858] wlan0: authenticated
        [ 16.453405] wlan0: associate with c0:8a:de:7c:60:e8 (try 1)
        [ 16.456392] wlan0: RX AssocResp from c0:8a:de:7c:60:e8 (capab=0x431 status=0 aid=2)
        [ 16.456398] wlan0: associated
        [ 16.457014] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 16.457017] ieee80211 phy0: brcmsmac: brcms_ops_bss_info_changed: associated
        [ 16.457019] ieee80211 phy0: changing basic rates failed: -22
        [ 16.457021] ieee80211 phy0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 0 (implement)
        [ 16.457226] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
        [ 16.654196] ieee80211 phy0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
        [ 17.823565] ieee80211 phy0: wl0: brcms_c_d11hdrs_mac80211: txop exceeded phylen 180/256 dur 1946/1504
        [ 18.220865] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 26.881422] wlan0: no IPv6 routers present
        [ 68.228293] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 73.240133] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 76.574490] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 102.180006] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 103.100984] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 124.171624] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)

    Read the article

  • Top 10 solution documents for Weblogic Server J2EE Feb 2014 - May 2014

    - by jhpierce -Oracle
    The following are the top 10 documents linked to SRs as solutions, for WebLogic Server J2EE issues, from Feb 2014 through May 2014.

    1. 1163020.1 - How to configure Filtering class loader in weblogic.xml. To configure the Filtering Class Loader to specify that a certain package is loaded from an application, add a prefer-application-packages descriptor element.
    2. 1276593.1 - WLS - How to suppress servlet/JSP version details in the WebLogic HTTP response header. The string "X-Powered-By: Servlet/2.4 JSP/2.0" is showing up in the servlet response header; how to stop WebLogic from including servlet/JSP version details in the X-Powered-By HTTP response header.
    3. 1490080.1 - WebLogic Server 12.1.1.0 in a Cluster Environment Throws NotSerializableException for CDI Applications at com.sun.jersey.server.impl.cdi.CDIExtension. When running in a clustered environment, server start-up is not clean when you have CDI applications deployed.
    4. 1268138.1 - Sample TwoWay SSL implementation for JAX-WS Webservice. In the sample provided, the recipient checks for the initiator's public certificate. Note that the client certificate can be used for authentication.
    5. 1584779.1 - Socket Leaks When Calling Web-Service Over SSL. This is a known bug, 16810786.
    6. 1598617.1 - Secure WebService call throwing CANNOT RESOLVE URL FOR PROTOCOL HTTP/HTTPS through web server (Apache) plug-in.
    7. 1056121.1 - How to Timeout Weblogic Webservice Client. How to time out a WebService client with and without using Stubs.
    8. 1568638.1 - When packaging Jersey JAX-RS libraries into webapp throws NoSuchMethodError(). Seen when attempting to include custom Jersey implementation libraries into a web application in an OSB domain.
    9. 1118264.1 - WLS 10.3: Intermittent XA error: XAResource.XAER_RMERR. In WebLogic 10.3, a CMP EJB sometimes throws the exception.
    10. 1608951.1 - How to get More Details About Error BEA-101215 Malformed Request. Request parsing failed Code: -1. Seen when accessing the application via a load balancer.

    Read the article

  • Free 48 Page PDF Guide to Mastering Social Networking with Gwibber [Linux]

    - by Asian Angel
    Are you using Gwibber on your Linux system but not making full use of its potential? This free 48 page PDF guide will show you how to use and tweak Gwibber for the best performance when it comes to working with your social networks. Photo courtesy of Gwibber Blog.

    Examples of sections included in the guide are:
    - Installing and getting started with Gwibber
    - Becoming familiar with Gwibber's UI
    - Broadcasting and interacting on your social networks through Gwibber
    - Filtering the flow of information
    - Customizing the interface
    - And more

    Here is an excerpt from the section on Filtering the Flow of Information. The step by step instructions combined with helpful, labeled screenshots make this a nice guide to have for anyone wanting to get the most out of Gwibber for their social networking needs.

    Note: Gwibber works with Twitter, identi.ca, StatusNet, Facebook, FriendFeed, Digg, Flickr, and Qaiku.

    Download the Master Social Networking with Gwibber PDF Guide [via OMG! Ubuntu!] Note: In this instance this is a direct download of the PDF guide itself. Visit the Gwibber Homepage.

    Read the article

  • Dlink DWA-643 ExpressCard / Atheros AR5008 can't connect to wifi networks

    - by Justin Kelly
    I've just purchased a D-Link DWA-643 Xtreme N ExpressCard Notebook Adapter, but it can't connect to my wireless network. The card is listed on the FSF website; refer to the links below:

        http://www.fsf.org/resources/hw/index_html/net/wireless/index_html/cards.html
        http://www.dlink.com.au/products/?pid=550

    Ubuntu sees the card as using the Atheros AR5008 chipset (see the lspci output below). The card lights up and I can see available wifi networks using this card, so it seems to 'just work' on Ubuntu 12.04, but when I try to connect to my networks, it fails. I've tried setting the network to all the different options (WEP, WPA2, no encryption, etc., b/g/n) but Ubuntu still can't connect. I've also installed wicd but still couldn't connect. Has anyone got a DWA-643 to work in Ubuntu? Or does anyone have any suggestion on how to get it to connect? Any help would be greatly appreciated.

    Note: the laptop has built-in wifi, but it's Broadcom; it works but at dialup-speed, and I've had nothing but trouble using the Broadcom drivers, so I purchased the FSF-recommended ExpressCard as I hoped it would 'just work' on the latest Ubuntu. I have tried to disable the built-in Broadcom wifi, but even with the Broadcom driver uninstalled and unavailable it didn't help the D-Link to connect. Previously I had MAC address filtering on the router; I've added the D-Link's MAC and also disabled MAC address filtering entirely, still no luck.

    lspci output below:

        18:00.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01)
            Subsystem: D-Link System Inc Device 3a6f
            Flags: bus master, fast devsel, latency 0, IRQ 18
            Memory at e4000000 (64-bit, non-prefetchable) [size=64K]
            Capabilities: [40] Power Management version 2
            Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit-
            Capabilities: [60] Express Legacy Endpoint, MSI 00
            Capabilities: [90] MSI-X: Enable- Count=1 Masked-
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Virtual Channel
            Kernel driver in use: ath9k
            Kernel modules: ath9k

    Read the article

  • Doing powerups in a component-based system

    - by deft_code
    I'm just starting to really get my head around component-based design. I don't know what the "right" way to do this is. Here's the scenario: the player can equip a shield. The shield is drawn as a bubble around the player, it has a separate collision shape, and it reduces the damage the player receives from area effects. How is such a shield architected in a component-based game?

    Where I get confused is that the shield obviously has three components associated with it:
    - Damage reduction / filtering
    - A sprite
    - A collider

    To make it worse, different shield variations could have even more behaviors, all of which could be components:
    - boost player maximum health
    - health regen
    - projectile deflection
    - etc.

    Am I overthinking this?
    - Should the shield just be a super component? I really think this is the wrong answer. So if you think this is the way to go, please explain.
    - Should the shield be its own entity that tracks the location of the player? That might make it hard to implement the damage filtering. It also kinda blurs the lines between attached components and entities.
    - Should the shield be a component that houses other components? I've never seen or heard of anything like this, but maybe it's common and I'm just not deep enough yet.
    - Should the shield just be a set of components that get added to the player? Possibly with an extra component to manage the others, e.g. so they can all be removed as a group (accidentally leave behind the damage reduction component; now that would be fun). A sketch of this option follows after this list.
    - Something else that's obvious to someone with more component experience?
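    To make the last option concrete, here is a minimal Python sketch (all names are hypothetical, not from any particular engine) of a shield as a group of components installed and removed together:

        class Entity:
            def __init__(self):
                self.components = {}              # name -> component instance

            def add(self, name, component):
                self.components[name] = component

            def remove(self, name):
                self.components.pop(name, None)

        class ShieldKit:
            """Installs and removes the shield's components as one group,
            so the damage-reduction component can't get left behind."""

            def __init__(self, sprite, collider, damage_filter):
                self.parts = {
                    "shield_sprite": sprite,
                    "shield_collider": collider,
                    "shield_damage_filter": damage_filter,
                }

            def attach(self, entity):
                for name, part in self.parts.items():
                    entity.add(name, part)

            def detach(self, entity):
                for name in self.parts:
                    entity.remove(name)

    Variant shields just pass different part sets to the kit (add a regen component, a deflector, and so on); the kit itself plays the role of the "extra component to manage the others".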

    Read the article

  • Missing X-Spam-Status header

    - by Walt Stoneburner
    I recently upgraded to Ubuntu 14.04.1 LTS (trusty), have followed the directions in https://help.ubuntu.com/14.04/serverguide/mail-filtering.html, and am sending and receiving mail just fine. While I do see X-Virus-Scanned headers in my messages, which suggests mail is indeed being processed, I do not see any X-Spam-Level or X-Spam-Score headers being added to messages. This makes downstream procmailrc and client-side filtering more difficult.

    While having $final_spam_destiny = D_DISCARD in /etc/amavis/conf.d/20-debian_defaults does greatly reduce spam to my inbox, I had concerns about false positives prior to tuning and didn't know where they were going, so I have set it to D_PASS for the time being. This exposed the problem. I'm not sure where to look to start diagnosing the problem (otherwise I'd post a suspect configuration file). /etc/amavis/conf.d/15-content_filter_mode has the lines uncommented to enable virus and spam checks, and virus checking appears to be working according to the headers. SpamAssassin certainly seems to be starting just fine, too:

        SpamAssassin debug facilities: info
        SA info: zoom: able to use 360/360 'body_0' compiled rules (100%)
        SpamAssassin loaded plugins: AskDNS, AutoLearnThreshold, Bayes, BodyEval, Check, DKIM, DNSEval, FreeMail, HTMLEval, HTTPSMismatch, Hashcash, HeaderEval, ImageInfo, MIMEEval, MIMEHeader, Pyzor, Razor2, RelayEval, ReplaceTags, Rule2XSBody, SPF, SpamCop, URIDNSBL, URIDetail, URIEval, VBounce, WLBLEval, WhiteListSubject
        SpamControl: init_pre_fork on SpamAssassin done

    I've also set $log_level = 2; in /etc/amavis/conf.d/50-user and don't see any obvious errors rolling by in the logs.

    Q: Any recommendations of what to try next?

    UPDATE (it appears that I have the right setting already):

        /etc/amavis/conf.d$ grep sa_tag_level_deflt *
        20-debian_defaults:# $sa_tag_level_deflt = 2.0; # add spam info headers if at, or above that level
        20-debian_defaults:$sa_tag_level_deflt = -999; # add spam info headers if at, or above that level
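    One quick sanity check, independent of the amavis configuration: inspect a freshly delivered message on disk and list exactly which scanner headers made it through, ruling out a mail client merely hiding them. A small Python helper using the standard email module (the file path is a placeholder):

        import email

        # Parse a message saved from the Maildir/mbox and print scanner headers.
        with open("/tmp/sample-message.eml", "rb") as fh:
            msg = email.message_from_binary_file(fh)

        for name, value in msg.items():
            if name.lower().startswith(("x-spam", "x-virus")):
                print(name + ":", value)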

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to display to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walk through of creating the ASP.NET MVC application and data access code used throughout this series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more!

    Read the article

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question but a design consideration that I often run across in my day-to-day development work. Let's say that you have a class that represents some kind of collection:

        Public Class ModifiedCustomerOrders
            Public Property Orders As List(Of ModifiedOrders)
        End Class

    Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the Modified Customer Orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you:

    1. Add filtering calls to the ModifiedCustomerOrders class so that you can say: MyOrdersClass.RemoveCanceledOrders()
    2. Create a Static / Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders)
    3. Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders()
    4. Create a "Service" method that handles the getting of Orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA()
    5. Others?

    I tend to prefer the tooling / extension method approaches as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into the ModifiedCustomerOrders, having it as part of the class makes it a little bit more complicated to test. Typically, I choose to use extension methods where I am doing parameterless transformations / filters. As they get more complex, I will move the logic into a static class instead. Thoughts on this approach? How would you approach it?

    Read the article

  • Repository query conditions, dependencies and DRY

    - by vFragosop
    To keep it simple, let's suppose an application which has Accounts and Users. Each account may have any number of users. There are also 3 consumers of UserRepository:
    - An admin interface which may list all users
    - A public front-end which may list all users
    - An account-authenticated API which should only list its own users

    Assuming UserRepository is something like this:

        class UsersRepository extends DatabaseAbstraction {
            private function query() {
                return $this->database()->select('users.*');
            }

            public function getAll() {
                return $this->query()->exec();
            }

            // IMPORTANT:
            // Tons of other methods for searching, filtering,
            // joining of other tables, ordering and such...
        }

    Keeping in mind the comment above, and the necessity to abstract user querying conditions, how should I handle querying of users filtered by account_id? I can picture three possible roads:

    1. Should I create an AccountUsersRepository?

        class AccountUsersRepository extends UserRepository {
            public function __construct(Account $account) {
                $this->account = $account;
            }

            private function query() {
                return parent::query()
                    ->where('account_id', '=', $this->account->id);
            }
        }

    This has the advantage of reducing the duplication of UsersRepository methods, but doesn't quite fit into anything I've read about DDD so far (I'm a rookie, by the way).

    2. Should I put it as a method on AccountsRepository?

        class AccountsRepository extends DatabaseAbstraction {
            public function getAccountUsers(Account $account) {
                return $this->database()
                    ->select('users.*')
                    ->where('account_id', '=', $account->id)
                    ->exec();
            }
        }

    This requires the duplication of all UserRepository methods and may need another UserQuery layer that implements that querying logic in a chainable way.

    3. Should I query UserRepository from within my Account entity?

        class Account extends Entity {
            public function getUsers() {
                return UserRepository::findByAccountId($this->id);
            }
        }

    This feels more like an aggregate root to me, but introduces a dependency of UserRepository on the Account entity, which may violate a few principles.

    4. Or am I missing the point completely? Maybe there's an even better solution?

    Footnote: besides permissions being a Service concern, in my understanding, they shouldn't implement SQL queries but leave that to repositories, since those may not even be SQL-driven.

    Read the article

  • jQuery Datatable in MVC … extended.

    - by Steve Clements
    There are a million plugins for jQuery, and when a web forms developer like myself works in MVC, making use of them is par for the course! MVC is the way now; web forms are but a memory!!

    Grids / tables are my focus at the moment. I don't want to get into writing reams of CSS and HTML, but it's not acceptable to simply dump a table on the screen; functionality like sorting, paging, fixed headers and perhaps filtering are expected behaviour. What isn't always required though is the massive functionality, like editing, you get with many grid plugins out there. You potentially spend a long time getting everything hooked together when you just don't need it.

    That is where the jQuery DataTable plugin comes in. It doesn't have editing "out of the box" (you can add other plugins as you require to achieve such functionality). What it does though is very nicely format a table (and integrate with jQuery UI) without needing to hook up any async actions etc. Take a look here: http://www.datatables.net

    I did in the first instance start looking at the Telerik MVC grid control. I'm a fan of Telerik controls, and if you are developing an in-house or open source app you get the MVC stuff for free... nice! Their grid however is far more than I require. Note: Using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing. I won't cover that here though, mainly because I don't have a clear answer on the best way to solve it!

    One nice thing about the DataTable above is how easy it is to extend (http://www.datatables.net/examples/plug-ins/plugin_api.html) and there are some nifty examples on the site already. I however have a requirement that wasn't on the site: I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable if required within its own space, i.e. everything above the grid doesn't scroll as well. Now a CSS master may have a great solution to this; I'm not that master and so didn't! The content above the grid can vary, so any kind of fixed positioning is out. So I wrote a little extension for the DataTable and hooked it up to the document.ready and window.resize events.

    Initialising my dataTable(s):

        $(document).ready(function () {
            var dTable = $(".tdata").dataTable({
                "bPaginate": false,
                "bLengthChange": false,
                "bFilter": true,
                "bSort": true,
                "bInfo": false,
                "bAutoWidth": true,
                "sScrollY": "400px"
            });
        });

    My extension to the API to give me the resizing:

        // **********************************************************************
        // jQuery dataTable API extension to resize grid and adjust column sizes
        $.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) {
            var id = oSettings.nTable.id;
            var dt = $("#" + id);
            var top = dt.position().top;
            var winHeight = $(document).height();
            var remain = (winHeight - top) - 83;
            dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;");
            this.fnAdjustColumnSizing();
        }

    This is very much in debug mode, so pretty verbose at the moment; I'll tidy that up later! You can see the last call is a call to an existing method; as the columns are fixed and that normally involves some CSS voodoo, a call to adjust those sizes is required. Just above it is the style that the dataTable gives the grid wrapper div (I got that from some Firebug action) with my new height stuck in. The -83 is to give me the space I require at the bottom for a fixed footer!

    Finally I hook that up to the load and window resize. I'm actually using jQuery UI tabs as well, so I've got that in the open event of the tabs:

        $(document).ready(function () {
            var oTable;
            $("#tabs").tabs({
                "show": function (event, ui) {
                    oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable();
                    if (oTable.length > 0) {
                        oTable.fnSetHeightToBottom();
                    }
                }
            });
            $(window).bind("resize", function () {
                oTable.fnSetHeightToBottom();
            });
        });

    And that's all there is to it. Testament to the wonders of jQuery and the immense community surrounding it, to which I am extremely grateful. I've also hooked up some custom column filtering on the grid; pretty normal stuff though, and you can get what you need for that from their website. I do hide the out-of-the-box filter input as I wanted column-specific filtering; you need filtering turned on when initialising to get it to work, and that input comes with it! Tip: fnFilter is the method you want, with column index as a param; I used data tags to simplify that.

    Read the article

  • Content Query Web Part and the Yes/No Field

    - by Bil Simser
    The Content Query Web Part (CQWP) is a pretty powerful beast. It allows you to do multiple site queries and aggregate the results. This is great for rolling up content and doing some summary-type reporting. Here's a trick to remember about Yes/No fields and using the CQWP. If you're building a news-style site and want to aggregate, say, all the announcements that people tag a certain way up onto the home page, this might be a solution.

    First we need to allow a way for users of all our sites to mark an announcement for inclusion on our intranet home page. We'll do this by just modifying the Announcement content type and adding a Yes/No field to it. There are alternate ways of doing this, like building a new Announcement type or stapling a feature to all sites to add our column, but this is pretty low impact and only affects our current site collection, so let's go with it for now, okay? You can berate me in the comments about the proper way I should have done this part.

    Go to the Site Settings for the site collection and click on Site Content Types under Galleries. This takes you to the gallery for this site and all subsites. Scroll down until you see the List Content Types and click on Announcements. Now we're modifying the Announcement content type, which affects all those announcement lists that are created by default if you're building sites using the Team Site template (or creating a new Announcements list on any site, for that matter). Click on "Add from new site column" under the Column list. This will allow us to create a new Yes/No field that users will see in Announcement items; the field lets the user flag the announcement for inclusion on the home page. Feel free to modify the fields as you see fit for your environment; this is just an example.

    Now that we've added the column to our Announcements content type, we can go into any site that has an announcement list, modify that announcement, and flag it to be included on our home page. See the new Featured column? That was the result of modifying our Announcements content type on this site collection.

    Now we can move onto the dirty part: displaying it in a CQWP on the home page. And here is where the fun begins (and the head scratching should end). On our home page we want to drop a Content Query Web Part and aggregate any announcement that's been flagged as Featured by the users (we could also add a filter to handle Expires so we don't show old content; go ahead and do that if you want). First add a CQWP to the page, then modify the settings for the web part. In the first section, Query, set the List Type to Announcements and the Content type to Announcement. Click Apply and you'll see the results display all announcements from any site in the site collection. I have five team sites created, each with a unique announcement added to them.

    Now comes the filtering. We don't want to include every announcement, only the ones users flag using that Featured column we added. At first blush you might scroll down to the Additional Filters part of the Query options and set the Featured column to be equal to Yes. This seems correct, doesn't it? After all, the column is a Yes/No column, and looking at an announcement in the site, it displays the field as Yes or No. However, after applying the filter you get only the announcements that were NOT flagged (I have the announcements from Team Site 1 and Team Site 4 flagged as Featured). Huh? It's BACKWARDS!

    Let's confirm that. Go back in and change the Additional Filters section from Yes to No, hit Apply, and you get the same result as before. Wait a minute? Shouldn't I see Team Site 1 and 4 if the logic is backwards? Why am I seeing the same thing as before? What gives...

    For whatever reason, unknown to me, a Yes/No field (even though it displays as such) really uses 1 and 0 behind the scenes. Yeah, someone was stuck on using integer values for booleans when they wrote SharePoint (probably after a long night of whiteboarding ways to mess with developers' heads) and came up with this. The solution is pretty simple but not very discoverable: set the filter value to 1 instead of Yes, and it will filter the items marked as Featured correctly.

    This kind of solution could also be extended and enhanced. Here are a few suggestions and ideas:
    - Modify the ItemStyle.xsl file to add a new style for this aggregation which would include the first few paragraphs of the body (or perhaps add another field to the content type called Excerpt or Summary and display that instead)
    - Add an Image column to the Announcement content type to include a Picture field and display it in the summary
    - Add a Category choice field (Employee News, Current Events, Headlines, etc.) and add multiple CQWPs to the home page, filtering each one on a different category

    I know some may find this topic old and dusty, but I didn't see a lot out there specifically on filtering Yes/No fields, and the whole 1/0 trick was a little wonky, so I figured a few pictures would help walk through overcoming yet another SharePoint weirdness. With a little work and some creative juices you can easily use the power of aggregation and the CQWP to build a news site from content on your team sites.

    Read the article

  • How can I implement a database TableView like thing in C++?

    - by Industrial-antidepressant
    How can I implement a TableView-like thing in C++? I want to emulate a tiny relational-database-like thing in C++. I have data tables, and I want to transform them somehow, so I need a TableView-like class. I want filtering, sorting, freely adding and removing items, and transforming (e.g. view as UPPERCASE and so on). The whole thing is inside a GUI application, so data tables and views are attached to a GUI (or HTML or something). So how can I identify an item in the view? How can I signal it when the table is changed? Is there some design pattern for this? Here is a simple table and a simple data item:

        #include <iostream>   // added: needed for std::cout in main()
        #include <string>
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/random_access_index.hpp>

        using boost::multi_index_container;
        using namespace boost::multi_index;

        struct Data {
            Data() {}
            int id;
            std::string name;
        };

        struct row{};
        struct id{};
        struct name{};

        typedef boost::multi_index_container<
            Data,
            indexed_by<
                random_access<tag<row> >,
                ordered_unique<tag<id>, member<Data, int, &Data::id> >,
                ordered_unique<tag<name>, member<Data, std::string, &Data::name> >
            >
        > TDataTable;

        class DataTable {
        public:
            typedef Data item_type;
            typedef TDataTable::value_type value_type;
            typedef TDataTable::const_reference const_reference;
            typedef TDataTable::index<row>::type TRowIndex;
            typedef TDataTable::index<id>::type TIdIndex;
            typedef TDataTable::index<name>::type TNameIndex;
            typedef TRowIndex::iterator iterator;

            DataTable() :
                row_index(rule_table.get<row>()),
                id_index(rule_table.get<id>()),
                name_index(rule_table.get<name>()),
                row_index_writeable(rule_table.get<row>())
            {
            }

            TDataTable::const_reference operator[](TDataTable::size_type n) const { return rule_table[n]; }
            std::pair<iterator,bool> push_back(const value_type& x) { return row_index_writeable.push_back(x); }
            iterator erase(iterator position) { return row_index_writeable.erase(position); }
            bool replace(iterator position, const value_type& x) { return row_index_writeable.replace(position, x); }
            template<typename InputIterator> void rearrange(InputIterator first) { return row_index_writeable.rearrange(first); }
            void print_table() const;
            unsigned size() const { return row_index.size(); }

            TDataTable rule_table;
            const TRowIndex& row_index;
            const TIdIndex& id_index;
            const TNameIndex& name_index;
        private:
            TRowIndex& row_index_writeable;
        };

        class DataTableView {
            DataTableView(const DataTable& source_table) {}
            // How can I implement this?
            // I want filtering, sorting, signaling the upper GUI layer, and so on.
        };

        int main() {
            Data data1;
            data1.id = 1;
            data1.name = "name1";
            Data data2;
            data2.id = 2;
            data2.name = "name2";

            DataTable table;
            table.push_back(data1);
            DataTable::iterator it1 = table.row_index.iterator_to(table[0]);
            table.erase(it1);
            table.push_back(data1);

            Data new_data(table[0]);
            new_data.name = "new_name";
            table.replace(table.row_index.iterator_to(table[0]), new_data);
            for (unsigned i = 0; i < table.size(); ++i)
                std::cout << table[i].name << std::endl;

        #if 0
            // using scenarios:
            DataTableView table_view(table);
            table_view.fill_from_source(); // synchronization with source
            table_view.remove(data_item1); // remove item from view
            table_view.add(data_item2);    // add item from source table
            table_view.filter(filterfunc); // filtering
            table_view.sort(sortfunc);     // sorting
            // modifying from source_table, how to signal the table_view?
            // FYI: the table view is attached to a GUI item
            table.erase(data);
            table.replace(data);
        #endif
            return 0;
        }
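    One common shape for the DataTableView above, sketched in Python for brevity rather than C++ (all names are illustrative, not a drop-in design): the view owns only row indices into the source table, filtering and sorting rearrange that index list, and the table notifies registered observers on change:

        class Table(list):
            """Source container; notifies attached views after mutations."""
            def __init__(self, *args):
                super().__init__(*args)
                self.observers = []

            def notify(self):
                for callback in self.observers:
                    callback()

        class TableView:
            def __init__(self, table):
                self.table = table
                self.rows = list(range(len(table)))   # indices into the source
                table.observers.append(self.on_table_changed)

            def filter(self, predicate):
                self.rows = [i for i in self.rows if predicate(self.table[i])]

            def sort(self, key):
                self.rows.sort(key=lambda i: key(self.table[i]))

            def __iter__(self):
                return (self.table[i] for i in self.rows)

            def on_table_changed(self):
                # Simplest policy: rebuild and let the GUI repaint everything;
                # production code would diff and emit row added/removed signals.
                self.rows = list(range(len(self.table)))

    The same idea ports to C++ directly: the view stores a std::vector of row indices (or iterators into the random_access index), and the table exposes an observer list or signal/slot for change notification; identifying an item in the view is then just a position in that vector.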

    Read the article

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET. This is a major refresh of the Windows Azure SDK and it includes some great new features and enhancements. These new capabilities include:

    - Visual Studio 2013 Preview Support: The Windows Azure SDK now supports using the new VS 2013 Preview
    - Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud
    - Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources
    - Virtual Machines: Start and stop VMs with suspended billing directly from within Visual Studio
    - Cloud Services: New Emulator Express option with reduced footprint and Run as Normal User support
    - Service Bus: New high availability options, Notification Hub support, improved VS tooling
    - PowerShell Automation: Lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more

    All of these SDK enhancements are now available to start using immediately and you can download the SDK from the Windows Azure .NET Developer Center. Visual Studio's Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today's SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today.

    Visual Studio 2013 Preview Support

    Today's Windows Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases. To install the Windows Azure SDK 2.1 on your local computer, choose the "install the sdk" link from the Windows Azure .NET Developer Center. Then choose which version of Visual Studio you want to use it with; clicking the third link will install the SDK with the latest VS 2013 Preview. If you don't already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web.

    Visual Studio 2013 VM Image Hosted in the Cloud

    One of the requests we've heard from several customers has been to have the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer). With today's SDK update we've added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it. This provides a really easy way to create a development environment in the cloud with the latest tools. With the recent shutdown and suspend billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, and then shut down the virtual machine and not have to worry about usage charges while the virtual machine is not in use. You can create your own VS image in the cloud by using the New -> Compute -> Virtual Machine -> From Gallery menu within the Windows Azure Management Portal, and then selecting the "Visual Studio Ultimate 2013 Preview" template.

    Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources

    With the Windows Azure SDK 2.1 release you'll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node. From the top level node you can now manage your Windows Azure credentials, import a subscription file or filter Server Explorer to only show services from particular subscriptions or regions. Note: The Web Sites and Mobile Services nodes will appear outside the Windows Azure node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview. Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer. You no longer need to manually add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location.

    Subscription and Region Filtering Support

    Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services in the Server Explorer by the subscription or region they are in. If you have multiple subscriptions but need to focus your attention on just a few subscriptions for some period of time, this is a handy way to hide the services from other subscriptions until they become relevant. You can do the same sort of filtering by region. To enable this, just select "Filter Services" from the context menu on the Windows Azure node, then choose the subscriptions and/or regions you want to filter by; for example, services from a pay-as-you-go subscription within the East US region. Visual Studio will then automatically filter the items that show up in the Server Explorer appropriately. With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You'll notice that external accounts have a slightly different icon in Server Explorer to indicate they are from outside your subscription.

    Other Improvements

    We've also improved the Server Explorer by adding additional properties and actions to the services exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance, as well as the properties on storage accounts, virtual machines and web sites. Just select the object of interest in Server Explorer and view the properties in the property pane. We also now have full support for creating/deleting/updating storage tables, blobs and queues directly within Server Explorer; simply right-click on the appropriate storage account node and you can create them directly within Visual Studio.

    Virtual Machines: Start/Stop within Visual Studio

    Virtual Machines now have context menu actions that allow you to start, shut down, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not in use, and easily restart it when you need it. This is especially useful in dev/test scenarios where you can start a VM, such as a SQL Server, during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it). You can also now directly remote desktop into VMs using the "Connect using Remote Desktop" context menu command in VS Server Explorer.

    Cloud Services: Emulator Express with Run as Normal User Support

    You can now launch Visual Studio and run your cloud services locally as a Normal User (without having to elevate to an administrator account) using a new Emulator Express option included as a preview feature with this SDK release. Emulator Express is a version of the Windows Azure Compute Emulator that runs in a restricted mode (one instance per role); it doesn't require administrative permissions and uses 40% fewer resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles. To run your application locally using the Emulator Express option, simply change the following settings in the Windows Azure project:
    1. On the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab.
    2. Check the setting for IIS (Internet Information Services). Make sure that the option is set to IIS Express, not the full version of IIS; Emulator Express is not compatible with full IIS.
    3. On the Web tab, choose the option for Emulator Express.

    Service Bus: Notification Hubs

    With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside of Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android). Learn more about Notification Hubs on MSDN, or watch the Notification Hubs //BUILD/ presentation.

    Service Bus: Paired Namespaces

    One of the new features included with today's Windows Azure SDK 2.1 release is support for Service Bus "Paired Namespaces". Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example, due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously, to handle this scenario you had to manually set up separate namespaces that could act as a backup, then implement manual failover and retry logic which was sometimes tricky to get right. Service Bus now supports Paired Namespaces, which enables you to connect two namespaces together. When you activate the secondary namespace, messages are stored in the secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover enables the messages in the secondary queue. For detailed information about paired namespaces and high availability, see the new topic Asynchronous Messaging Patterns and High Availability.

    Service Bus: Tooling Improvements

    In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio's Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management. Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template. This code now uses an event-driven "message pump" programming model using the QueueClient.OnMessage method.

    PowerShell: Tons of New Automation Commands

    Since my last blog post on the previous Windows Azure SDK 2.0 release, we've updated Windows Azure PowerShell (which is a separate download) five times. You can find the full change log here. We've added new cmdlets in the following areas:
    - China instance and Windows Azure Pack support
    - Environment configuration
    - VMs
    - Cloud Services
    - Web Sites
    - Storage
    - SQL Azure
    - Service Bus

    China Instance and Windows Azure Pack

    We now support the following cmdlets for the China instance and Windows Azure Pack, respectively:
    - China instance: Web Sites, Service Bus, Storage, Cloud Service, VMs, Network
    - Windows Azure Pack: Web Sites, Service Bus
    We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future.

    Virtual Machines: Stop/Start Virtual Machines

    Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing. If you want to keep the original behavior of keeping your stopped VM provisioned, you can pass in the -StayProvisioned switch parameter.

    Virtual Machines: VM Endpoint ACLs

    We've added and updated a bunch of cmdlets for you to configure fine-grained network ACLs on your VM endpoints. You can use the following cmdlets to create ACL configs and apply them to a VM endpoint:
    - New-AzureAclConfig
    - Get-AzureAclConfig
    - Set-AzureAclConfig
    - Remove-AzureAclConfig
    - Add-AzureEndpoint -ACL
    - Set-AzureEndpoint -ACL
    The example in the original post shows how to add an ACL rule to an existing endpoint of a VM.

    Other improvements for Virtual Machine management include:
    - Added -NoWinRMEndpoint parameter to New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management
    - Added -DirectServerReturn parameter to Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return
    - Added Set-AzureLoadBalancedEndpoint cmdlet to modify load balanced endpoints

    Cloud Services: Remote Desktop and Diagnostics

    Remote Desktop and Diagnostics are popular debugging options for Cloud Services. We've introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell.

    Windows Azure Cloud Services Remote Desktop extension:
    - New-AzureServiceRemoteDesktopExtensionConfig
    - Get-AzureServiceRemoteDesktopExtension
    - Set-AzureServiceRemoteDesktopExtension
    - Remove-AzureServiceRemoteDesktopExtension

    Windows Azure Cloud Services Diagnostics extension:
    - New-AzureServiceDiagnosticsExtensionConfig
    - Get-AzureServiceDiagnosticsExtension
    - Set-AzureServiceDiagnosticsExtension
    - Remove-AzureServiceDiagnosticsExtension

    The example in the original post shows how to enable Remote Desktop for a Cloud Service.

    Web Sites: Diagnostics

    With our last SDK update, we introduced the Get-AzureWebsiteLog -Tail cmdlet to stream the logs of your Web Sites. Recently, we've also added cmdlets to configure Web Site application diagnostics:
    - Enable-AzureWebsiteApplicationDiagnostic
    - Disable-AzureWebsiteApplicationDiagnostic
    The two examples in the original post show how to enable application diagnostics to the file system and to a Windows Azure Storage Table.

    SQL Database

    Previously, you had to know the SQL Database server admin username and password if you wanted to manage a database on that SQL Database server. Recently, we've made the experience much easier by not requiring the admin credential if the database server is in your subscription. So you can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server you want to use for the following cmdlets:
    - Get-AzureSqlDatabase
    - New-AzureSqlDatabase
    - Remove-AzureSqlDatabase
    - Set-AzureSqlDatabase

    We've also added an -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses. Besides the above experience improvements, we've also added cmdlets to get the database server quota and set the database service objective. Check out the following cmdlets for details:
    - Get-AzureSqlDatabaseServerQuota
    - Get-AzureSqlDatabaseServiceObjective
    - Set-AzureSqlDatabase -ServiceObjective

    Storage and Service Bus

    Other new cmdlets include:
    - Storage: CRUD cmdlets for Azure Tables and Queues
    - Service Bus: cmdlets for managing authorization rules on your Service Bus Namespace, Queue, Topic, Relay and NotificationHub

    Summary

    Today's release includes a bunch of great features that enable you to build even better cloud solutions. All of the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Sync database with filter using SyncOrchestrator with Sync Framework 2.0

    - by Flo
    Hi, I want to synchronize two SQL databases, but since one of the databases only requires a subset of the data, I am looking for a filter option. Is there a possibility to add a filter to the SyncOrchestrator, or do I have to add the filter to the SyncProvider? According to this thread, it is not possible to filter with the DbSyncProvider: http://social.microsoft.com/Forums/en-US/uklaunch2007ado.net/thread/35d4deb8-a861-4fe3-a395-d175e14c353f

    Quote: "I understand your scenario, and the behavior of the DbSyncProvider is due to the current limitation. DbSyncProvider is built on top of the Microsoft Sync Framework that can support filtering. Unfortunately, DbSyncProvider does not yet."

    But that post is quite old; maybe that has changed now. I am working with this example at the moment, but I can't figure out how to add filtering here: http://msdn.microsoft.com/en-us/library/cc807255.aspx

    Read the article

  • Building the WAR with m2eclipse in combination with WTP (handling webResources)

    - by waxwing
    I have a situation where I have a web application that is built using Maven (i.e., the maven-war-plugin). For each code modification, we have had to manually launch Maven and restart the application server. Now, to reduce build cycle overhead, I want to use WTP to publish the webapp. We have resource processing with Maven, and there are some additional Maven tasks defined in our POM when building the webapp. Therefore m2eclipse seems like a natural solution. I have gotten far enough that the Maven builder is running these tasks and filtering resources correctly. However, when I choose "Run on Server", the WAR file does not look like it would if I built it in Maven. I am guessing that this is because WTP actually builds the WAR, and not the m2eclipse builder. So even though we have configured the maven-war-plugin in our POM, those settings are not used. Below is a snippet with our maven-war-plugin configuration. What is configured under "webResources" is not picked up, it appears:

        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>2.1-alpha-2</version>
        <configuration>
          <outputDirectory>${project.build.directory}</outputDirectory>
          <workDirectory>${project.build.directory}/work</workDirectory>
          <webappDirectory>${project.build.webappDirectory}</webappDirectory>
          <cacheFile>${project.build.webappDirectory}/webapp-cache.xml</cacheFile>
          <filteringDeploymentDescriptors>true</filteringDeploymentDescriptors>
          <nonFilteredFileExtensions>
            <nonFilteredFileExtension>pdf</nonFilteredFileExtension>
            <nonFilteredFileExtension>png</nonFilteredFileExtension>
            <nonFilteredFileExtension>gif</nonFilteredFileExtension>
            <nonFilteredFileExtension>jsp</nonFilteredFileExtension>
          </nonFilteredFileExtensions>
          <webResources>
            <!-- Add generated WSDL:s and XSD:s for the web service api. -->
            <resource>
              <directory>${project.build.directory}/jaxws/wsgen/wsdl</directory>
              <targetPath>WEB-INF/wsdl</targetPath>
              <filtering>false</filtering>
              <includes>
                <include>**/*</include>
              </includes>
            </resource>
          </webResources>
        </configuration>

    Do I need to reconfigure these resources to be handled elsewhere, or is there a better solution?

    Read the article

  • Removing duplicates without overriding hash method

    - by Javi
    Hello, I have a List which contains a list of objects, and I want to remove from this list all the elements which have the same values in two of their attributes. I had thought about doing something like this:

        List<Class1> myList;
        // ...
        Set<Class1> mySet = new HashSet<Class1>();
        mySet.addAll(myList);

    and overriding the hash method in Class1 so it returns a number which depends only on the attributes I want to consider. The problem is that I need to do a different filtering in another part of the application, so I can't override the hash method in this way (I would need two different hash methods). What's the most efficient way of doing this filtering without overriding the hash method? Thanks
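    The usual alternative is to move the notion of equality out of the element and into a key function, deduplicating through a map. A sketch of the idea in Python for brevity (attribute names are placeholders); the Java equivalent is a HashMap whose key is a small wrapper or pair holding the two attributes, which lets each part of the application use its own key without touching Class1's hash method:

        def dedupe(items, key=lambda x: (x.attr_a, x.attr_b)):
            seen = {}
            for item in items:
                seen.setdefault(key(item), item)   # keep the first item per key
            return list(seen.values())

        # A different filtering elsewhere just passes a different key function,
        # which is exactly what a single overridden hash method cannot provide.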

    Read the article

  • Can a PL/pgSQL function contain a dynamic subquery?

    - by morpheous
    I am writing a PL/pgSQL function. The function has input parameters which specify (indirectly) which tables to read filtering information from. The function embeds business logic which allows it to select data from different tables based on the input arguments. The function dynamically builds a subquery which returns filtering data, which is then used to run the main query. My questions are:

    1. Is it 'legal' to use a dynamic subquery in a PL/pgSQL function? I can't see why not, but this question is related to the next one.
    2. AFAIK, PL/pgSQL functions are cached or precompiled by the query engine. How does having a function that generates dynamic subqueries impact the work of the query engine?

    Read the article

  • Filter on button click D3.js?

    - by user1461701
    I'm working on a version of this bubble chart: http://vallandingham.me/vis/gates/. As I'm using a different set of data, instead of the "All Grants" and "Grants by Year" buttons I have one button for each of 5 consecutive years (2010, 2009, etc.). My opening graph depicts the data for 2010, which is achieved by filtering the data from the .csv file, which contains the data for all 5 years. The problem I'm having is filtering and re-rendering the graph when another year is then chosen. I'd really appreciate it if anyone has any suggestions on how I could simply achieve this. Thanks, Majella

    Read the article

  • cakephp filter index pages according to foreign keys

    - by Marki
    Hi there, I'm pretty new to CakePHP and was missing a crucial feature not generated as scaffolding: filtering. What do I have to do to provide dropdowns or multi-selects on the index pages for each field that is a (foreign) key, thereby allowing filtering of the table ("OR" inside a multi-select, "AND" between different multi-selects, if any)? From what my web search has shown me, there are many more people trying to accomplish the same thing, although I couldn't find anything that would work for me, because either they have text fields and do wildcard filtering, or the plugins they propose only work for 1.2 whereas I now started with 1.3, etc. Can someone alleviate the confusion and maybe present some working code, or direct me to the definitive guide[tm] where this matter has been solved? Thx

    Read the article
