Search Results

Search found 6198 results on 248 pages for 'traffic filtering'.

Page 10 of 248

  • will heavy network traffic affect other connections on HP ProCurve V1810-48G?

    - by nn4l
    I have an HP ProCurve V1810-48G switch with a few servers connected to it (everything in one rack). The switch is practically in its default configuration. While copying a few hundred GB of data from server_a to server_b (using tar cf - data | ssh server_b 'cd myhome; tar xf -'), essentially saturating the network capacity between those two servers, I noticed network-related error messages on the console of server_c, as if server_c were no longer able to send or receive traffic to server_d. After cancelling the copy command, everything was normal again. I would understand this if the connection used a shared resource, for example if server_a and server_c were in one datacenter, server_b and server_d were in another datacenter, and both datacenters were connected with a 100 Mbit line. But all of the mentioned servers are connected to the same switch and are located in the same IP network. I always thought that a connection between two servers on one switch would not affect any other server connected to that switch. It is also possible that the network-related error messages are caused by something else, but I can't risk a network problem for any other system on this switch. Please advise.
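
    A way to reproduce this on demand, assuming iperf3 (or the older iperf) can be installed on all four servers:

        # on server_b and server_d, start listeners
        iperf3 -s

        # on server_a: saturate the link to server_b, as the tar-over-ssh copy did
        iperf3 -c server_b -t 60

        # meanwhile, from server_c: check whether throughput or latency to server_d degrades
        iperf3 -c server_d -t 60
        ping server_d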

    Read the article

  • SQL – Building a High Traffic, Profitable Blog – A Unique Gift on Author’s Birthday

    - by Pinal Dave
    Every July 30th, I like to do something new. It is my birthday, and I like to give gifts to everyone on this day. Last year at this time I had written an article, A Year Older and 3 SQL Server Books and 3 Video Courses – 33. I had written a total of 3 books by that time and had published a total of 3 Pluralsight courses. When I look back at the year, I feel that I gave it my best. Since last July 30th, I have written 6 more books and 5 more video courses; the totals are now 9 books and 8 video courses. It seems that I have been producing one new book or course every month since last July.

    Building a High Traffic, Profitable Blog

    Out of my 8 courses, my favorite is my latest course at Pluralsight. This course is about how to build a high traffic blog and monetize it. I have been blogging for over 7 years, and there have been many hurdles and roadblocks, but I have never stopped blogging for a single day. There have been many instances when I felt I should just hit delete and remove my entire blog from the web, but fortunately I had the courage to stand by my decisions. In the end, I kept fighting through the difficult times and kept on blogging. Every day there was a lesson to learn and every day there was an issue to resolve; I never gave up and kept on building new content. Today, after 7 years, when I look back there are many stories to tell. It was impossible to write down all the stories, so I decided to build a course based on my experience. In this course, I share all the best tricks to build a high traffic, profitable blog. When we talk about profit, people often talk about money, but the reality is that profit is a much bigger word than money; there are many different ways one can profit from their own blog. In this course, I discuss all the different ways you can profit by building a high traffic blog. I believe this course is for everybody who aspires to build a website or blog that earns them a profit. Here are the major topics covered in the course:

    Introduction
    Techniques to Engage Blog Readers
    Social Media – Social Sharing and Social Networking
    Search Engine Optimization (SEO)
    Monetizing a Blog
    Frequently Asked Questions
    Checklists

    Personally, I believe this is the best gift I can give to all of you, my friends: build a successful high traffic blog and monetize it. Here is the list of all of my video courses, and here is the list of all of my books. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
    This post walks you through creating a UI for paging, sorting and filtering a list of data items. It makes use of the excellent MVCContrib Grid and Pager Html UI helpers. A sample project is attached at the bottom. Our UI will eventually look like this. The application will make use of the Northwind database. The top portion of the page has a filter region, enclosed in a form tag. The select lists are wired up with jQuery to auto post back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product. The column headings are clickable for sorting, and an icon shows the sort direction.

    Strongly Typed View Models

    The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel since they are designed specifically for passing data down to the view. The ProductViewModel class holds information about a Product (a minimal sketch of it appears at the end of this excerpt). We use attributes to specify whether a property should be hidden and what its heading in the table should be. This metadata is used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)]) but are needed because we will be using them for filtering when writing our LINQ query.

    The following diagram shows the rest of the key ViewModels in our design. We have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination<ProductViewModel>. The MvcContrib Grid expects the IPagination<T> interface to determine the page number and page size of the collection we are working with. You can convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library; it also creates a paged set of type ProductViewModel. The ProductFilterViewModel class holds information about the different select lists and the ProductName being searched on. It also holds the state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in ViewState when working with WebForms; with MVC there is no state storage, so all state has to be fetched and passed back to the view). GridSortOptions is a type defined in the MvcContrib library and is used by the Grid to determine the current sort column and sort direction.

    The following shows the view and partial views used to render our UI. The Index view expects a type ProductListContainerViewModel which we described earlier.

        <% Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %>
        <% Html.RenderPartial("Pager", Model.ProductPagedList); %>
        <% Html.RenderPartial("SearchResults", Model); %>
        <% Html.RenderPartial("Pager", Model.ProductPagedList); %>

    The view contains a partial view "SearchFilters" and passes it the ProductFilterViewModel. The SearchFilters view uses this model to render all the search lists and the textbox. The partial view "Pager" uses the ProductPagedList, which implements the IPagination interface. The "Pager" view contains the MvcContrib Pager helper used to render the paging information; this view is included twice since we want the pager UI to be available at the top and bottom of the product list. The Pager partial view is located in the Shared directory so that it can be reused across views. The partial view "SearchResults" uses the ProductListContainerViewModel. This partial view contains the MvcContrib Grid, which needs both the ProductPagedList and GridSortOptions to render itself.

    The Controller Action

    An example request looks like this: /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action we create an IQueryable<ProductViewModel> by calling the GetProductsProjected() method.

        /// <summary>
        /// This method takes in a filter list, paging/sort options and applies
        /// them to an IQueryable of type ProductViewModel
        /// </summary>
        /// <returns>
        /// The return object is a container that holds the sorted/paged list,
        /// state for the filters and state about the current sorted column
        /// </returns>
        public ActionResult Index(string productName, int? supplierID, int? categoryID,
            GridSortOptions gridSortOptions, int? page)
        {
            var productList = productRepository.GetProductsProjected();

            // Set default sort column
            if (string.IsNullOrWhiteSpace(gridSortOptions.Column))
            {
                gridSortOptions.Column = "ProductID";
            }

            // Filter on SupplierID
            if (supplierID.HasValue)
            {
                productList = productList.Where(a => a.SupplierID == supplierID);
            }

            // Filter on CategoryID
            if (categoryID.HasValue)
            {
                productList = productList.Where(a => a.CategoryID == categoryID);
            }

            // Filter on ProductName
            if (!string.IsNullOrWhiteSpace(productName))
            {
                productList = productList.Where(a => a.ProductName.Contains(productName));
            }

            // Create all filter data and set current values, if any.
            // These values will be used to set the state of the select lists and textbox
            // by sending them back to the view.
            var productFilterViewModel = new ProductFilterViewModel();
            productFilterViewModel.SelectedCategoryID = categoryID ?? -1;
            productFilterViewModel.SelectedSupplierID = supplierID ?? -1;
            productFilterViewModel.Fill();

            // Order and page the product list
            var productPagedList = productList
                .OrderBy(gridSortOptions.Column, gridSortOptions.Direction)
                .AsPagination(page ?? 1, 10);

            var productListContainer = new ProductListContainerViewModel
            {
                ProductPagedList = productPagedList,
                ProductFilterViewModel = productFilterViewModel,
                GridSortOptions = gridSortOptions
            };

            return View(productListContainer);
        }

    The supplier, category and product name filters are applied to this IQueryable if any are present in the request. The ProductPagedList is created by applying a sort order and then calling the AsPagination method. Finally, the ProductListContainerViewModel is created and returned to the view. You have seen how to use strongly typed views with the MvcContrib Grid and Pager to render a clean, lightweight UI. You also saw how to use partial views to get data from the strongly typed model passed to them by the parent view. The code also shows you how to use jQuery to auto post back. The sample is attached below. Don't forget to change your connection string to point to the server containing the Northwind database. NorthwindSales_MvcContrib.zip My name is Kobayashi. I work for Keyser Soze.
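
    As referenced above, a minimal sketch of what the ProductViewModel might look like; the exact listing was not included in this excerpt, so the attribute choices and property set are inferred from the prose and the grid columns:

        // assumes: using System.ComponentModel; using System.ComponentModel.DataAnnotations;
        public class ProductViewModel
        {
            public int ProductID { get; set; }

            [DisplayName("Product Name")]
            public string ProductName { get; set; }

            [DisplayName("Quantity Per Unit")]
            public string QuantityPerUnit { get; set; }

            [DisplayName("Unit Price")]
            public decimal? UnitPrice { get; set; }

            // hidden from the grid, but needed for the supplier filter
            [ScaffoldColumn(false)]
            public int? SupplierID { get; set; }

            // hidden from the grid, but needed for the category filter
            [ScaffoldColumn(false)]
            public int? CategoryID { get; set; }
        }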

    Read the article

  • Sesame Data Browser: filtering, sorting, selecting and linking

    - by Fabrice Marguerie
    I have deferred the post about how Sesame is built in favor of publishing a new update. This new release offers major features such as the ability to quickly filter and sort data, select columns, and create hyperlinks to OData.

    Filtering, sorting, selecting

    In order to filter data, you just have to use the filter row, which becomes available when you click on the funnel button. You can then type some text and select an operator. The data grid will be refreshed immediately after you apply a filter. It works the same way for sorting: clicking on a column will immediately update the query and refresh the grid. Note that multi-column sorting is possible by using SHIFT-click. Viewing data is not enough; you can also view and copy the query string that returns that data. One more thing you can do to shape data is to select which columns are displayed: simply use the Column Chooser and you'll be done. Again, this will update the data and query string in real time.

    Linking to Sesame, linking to OData

    The other main feature of this release is the ability to create hyperlinks to Sesame. That's right, you can ask Sesame to give you a link you can display on a webpage, send in an email, or type in a chat session. You can get a link to a connection, or to a query. You'll note that you can also decide to embed Sesame in a webpage... Here are some sample links created via Sesame: Netflix movies with high ratings, sorted by release year; Netflix horror movies from the 21st century; Northwind discontinued products with remaining stock; Netflix empty connection. I'll give more examples in a post to follow. There are many more minor improvements in this release, but I'll let you find out about them by yourself :-) Please try Sesame Data Browser now and let me know what you think!

    PS: if you use Sesame from the desktop, please use the "Remove this application" command in the context menu of the desktop app and then "Install on desktop" again in your web browser. I'll activate automatic updates with the next release.

    Read the article

  • Search ranking for important keywords has gone down drastically [duplicate]

    - by Vaivhav
    Firstly, we are a small entrepreneurial team of 3 people, and I am more like an amateur webmaster of the company's website, as we cannot really afford a technical guy/department right now. A few weeks ago, our website traffic and rankings for most keywords decreased overnight. I have done a lot of reading since then and learned about Penguin 2.1, which people said is the reason for the drop. Something like this had never happened before. Now, I have gone through the entire Google Webmaster help section. It says there that if a manual penalty is taken against us, we would see a message on the Manual Actions page. So far, we haven't received any notice from Google for web spam. Some SEO guys I contacted said they found spam links in our backlink profile. I do believe I mistakenly purchased a cheap link/SEO scheme when I was still very new to SEO. This was more than a year ago, but since then we have been legitimate. Moreover, how do I find out which links are spam and which are not? Our content is all original, refreshing and the best you will find in our niche. We also have a blog, but on a different domain (wordpress.com), from which we send out anchored links to our business website. Is this a good thing to do? Now, how should we proceed to recover our traffic/rankings? I tried searching in Webmaster Tools for a way to reach Google and ask why the traffic has decreased suddenly, but I couldn't find a contact form or anything similar. Can someone please go through our website and help clarify the reason for the drop, along with a solution? I would really appreciate it, as I can't figure this out and it's taking a lot of time. Vaivhav

    Read the article

  • Strange traffic on fresh Ubuntu Server install

    - by Fishy
    I've just installed Ubuntu Server on my home box after becoming partially familiar with it at work, and I want to train up as a pen tester. I installed the latest version on a logical partition (the main one contains Win7) and selected none of the extra modules (I think). I installed ngrep and fired it up (along with tcpdump) and immediately saw some strange traffic which I am unable to identify. My PC is sending out UDP packets every couple of seconds to a seemingly random series of IP addresses, all on the same port (47669, though I did also see it use another port for a while). I watched it do this for about 20 minutes whilst trying to work out why. The only other traffic was the odd ARP request for the router and SSDP UPnP broadcasts from the router. Anyone know what this is, or have any advice on how best to find out? Thanks.

    EDIT: Actually, it's not my box generating the traffic. It's receiving the traffic on that port, from a series of IP addresses, and returning 'port unreachable' messages.
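
    One way to dig further: capture the traffic without name resolution and check whether a local process actually owns the port (eth0 is a placeholder for the actual interface):

        # watch the suspect UDP traffic, addresses unresolved
        sudo tcpdump -ni eth0 udp port 47669

        # check whether any local process has that UDP port open
        sudo ss -uanp | grep 47669

        # watch the ICMP port-unreachable replies being generated
        sudo tcpdump -ni eth0 icmp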

    Read the article

  • The Politics of Junk Filtering

    - by mikef
    If the national postal service, such as the Royal Mail in the UK, were to go through your letters and throw away all the stuff it considered to be junk instead of delivering it to you, you might be rather pleased until you discovered that it took too liberal a view of what was junk. Catalogs you'd asked for? Junk! Requests from charities? Who needs them! Parcels from competing carriers? Toss them away! The possibility of abuse by an agency in a monopolistic position is just too scary to tolerate. After all, the postal service could charge 'consultancy fees' to any sender who wanted to guarantee that his stuff got delivered, or they could even farm this out to other companies. Because Microsoft Outlook is just about the only email client used by the international business community in the west, its spam filter is the final arbiter of what gets read. My Outlook 2007, set to the default settings, junks all the perfectly innocent email newsletters that I subscribe to. Whereas Google Mail, Yahoo, and LIVE are all pretty accurate in detecting spam, Outlook makes all sorts of silly mistakes. The documentation speaks techno-babble about 'advanced heuristics', but the result boils down to an inaccurate mess, and the more that Microsoft fiddles with it, the stickier the mess. To make matters worse, it still lets through obvious spam. The filter is occasionally updated along with the other automatic 'security' updates, if you opt in to automatic updates. As an editor for a popular online publication that provides a newsletter service, this is an obvious source of frustration. We follow all the best practices we know about. We ensure that it is a trivial task to opt out of receiving the newsletter. We format it to the requirements of the service providers. We follow up, and resolve, every complaint. As a result, it gets delivered. It is galling to discover that, after all that effort, Outlook then often judges the contents to be junk on a whim, so you don't get to see it. A few days ago, Microsoft published the PST file format specification, under pressure from a European Union interoperability investigation by ECIS (the European Committee for Interoperable Systems). The objective was that other applications could then access existing PST files so as to migrate from existing Outlook installations to other solutions. Joaquín Almunia, the current competition commissioner, should now turn his attention to the more subtle problems of Microsoft Outlook. The junk problem seems to have come from a clumsy implementation of client-side spam filtering rather than from deliberate exploitation of a monopoly on the desktop email client for businesses, but it is a growing problem nonetheless. Cheers, Michael Francis

    Read the article

  • Filtering option list values based on security in UCM

    - by kyle.hatlestad
    Fellow UCM blog writer John Sim recently posted a comment asking about filtering values based on the user's security. I had never dug into that detail before, but thought I would take a look. It ended up being trickier than I originally thought and required a bit of insider knowledge, so I thought I would share.

    The first step is to create the option list table in Configuration Manager. You want to define the column for the option list value and any other columns desired. You then want to have a column which will store the security attribute to apply to the option list value. In this example, we'll name the column 'dGroupName'. The next step is to create a View based on the new table. For the Internal and Visible columns, you can select the option list column name. Then click on the Security tab, uncheck the 'Publish view data' checkbox and select the 'Use standard document security' radio button. Click on the 'Edit Values...' button and add the values for the option list. In the dGroupName field, enter the Security Group (or Account, if you use Accounts for security) to apply to that value. Create the custom metadata field and apply the View just created.

    The next step requires file system access to the server. Open the file [ucm directory]\data\schema\views\[view name].hda in a text editor. Below the line '@Properties LocalData', add the line:

        schSecurityImplementorColumnMap=dGroupName:dSecurityGroup

    The 'dGroupName' value designates the column in the table which stores the security value. 'dSecurityGroup' indicates the type of security to check against; it would be 'dDocAccount' if using Accounts. Save the file and restart UCM. Now when a user goes to the check-in page, they will only see the options for which they have read and write privileges to the associated Security Group, and on the Search page they will see the options for which they have at least read access. One thing to note: if a value that a user normally can't view on check-in or search is applied to a document, and the document itself is viewable by the user, the user will be able to see the value on the Content Information screen.
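
    After the edit, the top of that .hda file should look roughly like this (a sketch; every existing property in the file is left exactly as it was):

        @Properties LocalData
        schSecurityImplementorColumnMap=dGroupName:dSecurityGroup
        ... (existing LocalData properties)
        @end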

    Read the article

  • Cisco IOS policy route for router originated VPN traffic

    - by Paul
    We have a Cisco IOS router with two DSL connections. One of them is intended for general traffic (ADSL), the other for VPN links (BDSL) and various other traffic. So the default route is the ADSL link, and we have a combination of static routes for the VPN traffic and policy routes for other traffic types that should go out the BDSL link. For site-to-site traffic this is fine: we just static route the public IPs and remote networks out of the BDSL line. The policy-based routing works fine for any internal traffic that matches an ACL. The problem is that there are now remote VPN sites originating from dynamic addresses, so we cannot use static routes. The replies to incoming ISAKMP requests are following the default route out of the ADSL (despite there being no crypto map on that interface). I want to route the outgoing VPN traffic out of the BDSL. I have tried adding udp/500 and ESP, to and from, to the route-map ACL that pushes traffic out of the BDSL line, but it doesn't match, presumably because the route-map happens earlier than the IPsec stuff. Any ideas how I can do this? IOS version: 12.4(13)T.
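
    For reference, local policy routing is the usual mechanism for policy-routing traffic the router itself originates (interface route-maps only see transit traffic). A rough, untested sketch; the ACL/route-map names and the next hop are placeholders:

        ip access-list extended LOCAL-VPN
         permit udp any any eq isakmp
         permit esp any any
        !
        route-map LOCAL-POLICY permit 10
         match ip address LOCAL-VPN
         set ip next-hop 203.0.113.1
        !
        ip local policy route-map LOCAL-POLICY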

    Read the article

  • Web filtering (Proxy or DNS) with option for users to ignore the block

    - by Jon Rhoades
    We are struggling with our users visiting infected or "attack" sites, and with phishing in general. Most of our machines are protected by an enterprise antivirus and monitoring solution (McAfee ePO), and we try to get people to use Firefox... but no AV is perfect, we have to endure personal machines as well (albeit on their own 'Plague' VLANs), and we would like to do something about phishing, as our users seem intent on disclosing their passwords to the world... To complicate matters, we don't want to implement a hard block, for many, many reasons. Instead we would like to implement something akin to Firefox's "Reported Scam/Phish/Attack Site" page, with its "Get me out of here" and, crucially, "Let me in anyway" options, giving users a choice to still infect themselves if they feel like it (or to look at a site that was incorrectly blacklisted). The reason we can't just use Firefox is that we have a core enterprise app only certified on IE6 and IE7 - thank you, Oracle. Is it possible to implement this type of advisory filtering using either a proxy (in our case Squid) or DNS? These questions were a good start, but they don't address the advisory aspect of the filtering: http://serverfault.com/questions/15801/what-free-options-are-available-for-web-content-filtering http://serverfault.com/questions/47520/open-source-filtering-of-https-traffic

    Read the article

  • Mangling traffic from a Mikrotik Router

    - by TiernanO
    I have a MikroTik-powered router in the house with a couple of internet connections (two 200/10Mb cable modems and a 100/20Mb VDSL line). I am using mangle rules to set routing marks and NAT rules to do some load balancing, and everything seems to be going grand... but it only works for traffic passing through the router. Let me explain: I have 4 GigE ports on the machine, WAN1, 2 and 3, and a LAN port named LAN1. All traffic from LAN1 is getting mangled (as it should be), but traffic from the local router itself (proxy traffic, IPv6 tunnels, VPN connections) is not being mangled. It gets the first route to 0.0.0.0/0, which in my case is WAN2, and sticks with it. So, how do I get traffic from the local router to be mangled? Originally it was proxy traffic that caused the problem, but now with IPv6 and VPN it is even more important that it be mangled... the last time I enabled IPv6, all traffic went only through WAN2 and the rest were unused... Any ideas?
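
    On RouterOS, prerouting mangle rules never see packets the router itself originates; those traverse the output chain instead, so the routing mark has to be applied there. An untested sketch (the routing-mark name and the "local-nets" address list are placeholders):

        /ip firewall mangle add chain=output dst-address-list=!local-nets action=mark-routing new-routing-mark=to-WAN1 passthrough=no comment="mark router-originated traffic"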

    Read the article

  • Remote network traffic not passing through VPN

    - by John Virgolino
    We have the following topology:

        LAN A (10.14.0.0/16) <-VPN-> LAN B (10.18.0.0/16) --- SONICWALL <-VPN-> M0N0WALL --- LAN C (10.32.0.0/16)

    Traffic between LAN A and LAN B works perfectly. Traffic between LAN C and LAN B works perfectly. Traffic between LAN A and LAN C, not so much. LAN A's gateway has a route to LAN C that points to the Sonicwall. The Sonicwall has a route to LAN A pointing to the VPN gateway connecting LAN B to LAN A. Tracing packets on the Sonicwall shows that traffic destined for LAN C arrives on the Sonicwall, but it does not forward the traffic; it dies there. Traffic from LAN B gets forwarded. Tracing packets on the Sonicwall while sending traffic from LAN C destined for LAN A shows nothing. This tells me that the M0N0WALL is not forwarding traffic for the 10.14.0.0 network and the Sonicwall is not forwarding from 10.14.0.0. The SA on the Sonicwall terminates on the WAN zone and is defined to use an address group that incorporates both the 10.14.0.0 and 10.18.0.0 networks. The M0N0WALL is configured for the 10.18.0.0 network, and I have tried both with and without a static route to 10.14.0.0 on the M0N0WALL. I tried manually adding the 10.14.0.0 network to the SA on the M0N0WALL, but that really aggravated it and the SA never came up, so I reverted. I have checked all the firewall rules to make sure nothing is blocked. All of the Sonicwall auto-added rules look right. Specs: Sonicwall TZ200 (Enhanced OS), M0N0WALL v1.32. I'm at a loss at this point. Any help would be appreciated.

    Read the article

  • ASA access lists and Egress Filtering

    - by Nate
    Hello. I'm trying to learn how to use a Cisco ASA firewall, and I don't really know what I'm doing. I'm trying to set up some egress filtering, with the goal of allowing only the minimal amount of traffic out of the network, even if it originated from within the inside interface. In other words, I'm trying to set up dmz_in and inside_in ACLs as if the inside interface is not too trustworthy. I haven't fully grasped all the concepts yet, so I have a few issues. Assume that we're working with three interfaces: inside, outside, and DMZ. Let's say I have a server (X.Y.Z.1) that has to respond to PING, HTTP, SSH, FTP, MySQL, and SMTP. My ACL looks something like this:

        access-list outside_in extended permit icmp any host X.Y.Z.1 echo-reply
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq www
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ssh
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp-data established
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq 3306
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq smtp

    and I apply it like this:

        access-group outside_in in interface outside

    My question is: what can I do for egress filtering? I want to allow only the minimal amount of traffic out. Do I just "reverse" the rules (i.e. the smtp rule becomes access-list inside_out extended permit tcp host X.Y.Z.1 any eq smtp) and call it a day, or can I further cull my options? What can I safely block? Furthermore, when doing egress filtering, is it enough to apply "inverted" rules to the outside interface, or should I also look into making dmz_in and inside_in ACLs? I've heard the term "egress filtering" thrown around a lot, but I don't really know what I'm doing. Any pointers towards good resources and reading would also be helpful; most of the ones I've found presume that I know a lot more than I do.
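
    As a rough, untested sketch of the egress side: the ASA allows replies to inbound connections statefully, so the egress ACL only needs to permit connections the server itself originates (the services shown are placeholders for whatever the host legitimately needs), with everything else denied and logged:

        access-list dmz_in remark only connections the server itself originates; replies are handled statefully
        access-list dmz_in extended permit udp host X.Y.Z.1 any eq domain
        access-list dmz_in extended permit tcp host X.Y.Z.1 any eq smtp
        access-list dmz_in extended deny ip any any log
        access-group dmz_in in interface DMZ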

    Read the article

  • Xbox Live Traffic Light Tells You When It’s Game Time

    - by Jason Fitzpatrick
    Why log on to see if your friends are available for a game of Halo 3 when you can glance at this traffic light indicator to see if it's go time? Courtesy of tinkerer and gamer AndrewF, this fun little hack combines a small traffic light, an Arduino board, and the Xbox Live API to provide a real-time indicator of how many of your friends are online and gaming. When the light is red, nobody is available to play; yellow and green indicate that one or several of your friends are available, respectively. Hit up the link below to check out the parts list and project code. Xbox Live Traffic Lights [via Hack A Day]

    Read the article

  • Enhanced Dynamic Filtering

    - by Ricardo Peres
    Remember my last post on dynamic filtering? Well, this time I'm extending the code in order to allow two levels of querying. Match type, represented by the following options:

        public enum MatchType
        {
            StartsWith = 0,
            Contains = 1
        }

    And word match:

        public enum WordMatch
        {
            AnyWord = 0,
            AllWords = 1,
            ExactPhrase = 2
        }

    You can combine the two levels in order to achieve the following combinations:

    MatchType.StartsWith + WordMatch.AnyWord: matches any record that starts with any of the words specified
    MatchType.StartsWith + WordMatch.AllWords: not available; does not make sense, throws an exception
    MatchType.StartsWith + WordMatch.ExactPhrase: matches any record that starts with the exact specified phrase
    MatchType.Contains + WordMatch.AnyWord: matches any record that contains any of the specified words
    MatchType.Contains + WordMatch.AllWords: matches any record that contains all of the specified words
    MatchType.Contains + WordMatch.ExactPhrase: matches any record that contains the exact specified phrase

    Here is the code:

        public static IList Search(IQueryable query, Type entityType, String dataTextField,
            String phrase, MatchType matchType, WordMatch wordMatch, Int32 maxCount)
        {
            String[] terms = phrase.Split(' ').Distinct().ToArray();
            StringBuilder result = new StringBuilder();
            PropertyInfo displayProperty = entityType.GetProperty(dataTextField);
            IList searchList = null;

            MethodInfo orderByMethod = typeof(Queryable)
                .GetMethods(BindingFlags.Public | BindingFlags.Static)
                .Where(m => m.Name == "OrderBy").ToArray()[0]
                .MakeGenericMethod(entityType, displayProperty.PropertyType);
            MethodInfo takeMethod = typeof(Queryable)
                .GetMethod("Take", BindingFlags.Public | BindingFlags.Static)
                .MakeGenericMethod(entityType);
            MethodInfo whereMethod = typeof(Queryable)
                .GetMethods(BindingFlags.Public | BindingFlags.Static)
                .Where(m => m.Name == "Where").ToArray()[0]
                .MakeGenericMethod(entityType);
            MethodInfo distinctMethod = typeof(Queryable)
                .GetMethods(BindingFlags.Public | BindingFlags.Static)
                .Where(m => m.Name == "Distinct" && m.GetParameters().Length == 1)
                .Single()
                .MakeGenericMethod(entityType);
            MethodInfo toListMethod = typeof(Enumerable)
                .GetMethod("ToList", BindingFlags.Static | BindingFlags.Public)
                .MakeGenericMethod(entityType);
            MethodInfo matchMethod = typeof(String).GetMethod
            (
                (matchType == MatchType.StartsWith) ? "StartsWith" : "Contains",
                new Type[] { typeof(String) }
            );

            MemberExpression member = Expression.MakeMemberAccess
            (
                Expression.Parameter(entityType, "n"),
                displayProperty
            );
            MethodCallExpression call = null;
            LambdaExpression where = null;
            LambdaExpression orderBy = Expression.Lambda
            (
                member,
                member.Expression as ParameterExpression
            );

            switch (matchType)
            {
                case MatchType.StartsWith:
                    switch (wordMatch)
                    {
                        case WordMatch.AnyWord:
                            call = Expression.Call(member, matchMethod, Expression.Constant(terms[0]));
                            where = Expression.Lambda(call, member.Expression as ParameterExpression);

                            for (Int32 i = 1; i < terms.Length; ++i)
                            {
                                MethodCallExpression exp = Expression.Call(member, matchMethod, Expression.Constant(terms[i]));
                                where = Expression.Lambda
                                (
                                    Expression.Or(where.Body, exp),
                                    where.Parameters.ToArray()
                                );
                            }
                            break;

                        case WordMatch.ExactPhrase:
                            call = Expression.Call(member, matchMethod, Expression.Constant(phrase));
                            where = Expression.Lambda(call, member.Expression as ParameterExpression);
                            break;

                        case WordMatch.AllWords:
                            throw (new Exception("The match type StartsWith is not supported with word match AllWords"));
                    }
                    break;

                case MatchType.Contains:
                    switch (wordMatch)
                    {
                        case WordMatch.AnyWord:
                            call = Expression.Call(member, matchMethod, Expression.Constant(terms[0]));
                            where = Expression.Lambda(call, member.Expression as ParameterExpression);

                            for (Int32 i = 1; i < terms.Length; ++i)
                            {
                                MethodCallExpression exp = Expression.Call(member, matchMethod, Expression.Constant(terms[i]));
                                where = Expression.Lambda
                                (
                                    Expression.Or(where.Body, exp),
                                    where.Parameters.ToArray()
                                );
                            }
                            break;

                        case WordMatch.ExactPhrase:
                            call = Expression.Call(member, matchMethod, Expression.Constant(phrase));
                            where = Expression.Lambda(call, member.Expression as ParameterExpression);
                            break;

                        case WordMatch.AllWords:
                            call = Expression.Call(member, matchMethod, Expression.Constant(terms[0]));
                            where = Expression.Lambda(call, member.Expression as ParameterExpression);

                            for (Int32 i = 1; i < terms.Length; ++i)
                            {
                                MethodCallExpression exp = Expression.Call(member, matchMethod, Expression.Constant(terms[i]));
                                where = Expression.Lambda
                                (
                                    Expression.AndAlso(where.Body, exp),
                                    where.Parameters.ToArray()
                                );
                            }
                            break;
                    }
                    break;
            }

            query = orderByMethod.Invoke(null, new Object[] { query, orderBy }) as IQueryable;
            query = whereMethod.Invoke(null, new Object[] { query, where }) as IQueryable;

            if (maxCount != 0)
            {
                query = takeMethod.Invoke(null, new Object[] { query, maxCount }) as IQueryable;
            }

            searchList = toListMethod.Invoke(null, new Object[] { query }) as IList;

            return (searchList);
        }

    And this is how you'd use it:

        IQueryable query = ctx.MyEntities;
        IList list = Search(query, typeof(MyEntity), "Name", "Ricardo Peres",
            MatchType.Contains, WordMatch.ExactPhrase, 10 /*0 for all*/);

    Read the article

  • Traffic fall after a server problem

    - by Sébastien
    I have a website whose traffic I analyse with Google Analytics. Day after day the traffic (mainly from Google search) increased, until I had a problem with my server. The server was offline for one day, and since then I have not had as many users as before. Now it's as if the site is no longer referenced in Google's index (although when I type "site:mysite.com", I still get all the results). Do you know if this is normal behaviour, and whether the traffic will come back as before? (The server had its problems two days ago.)

    Read the article

  • Copy all bridge traffic to a specific interface

    - by Azendale
    I have a bridge/switch set up on a machine that has multiple ports. Occasionally I have a VM running through VirtualBox; I'll have it use a virtual adapter and then add that adapter to the bridge. I have heard that some switches can copy all the traffic they see to a specific port, usually for network monitoring. I would like to be able to run some Windows-based network tools, but I do not want to run Windows on the actual hardware, because it would be a lot of work to duplicate my setup in Windows. So I was thinking that if I can copy all traffic to one port, I can send it to a VM running Windows. How can I set this up? I think this might be ebtables territory, but I don't know ebtables well enough to be sure, and it always seems like (from my understanding of ebtables) ebtables does something with the traffic (drop, accept, etc.), but never copies it.
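
    One approach on Linux is tc's mirred action, which mirrors (copies) packets rather than issuing a verdict on them. An untested sketch, with eth0 standing in for a physical bridge port and vnet0 for the VM's virtual interface:

        # mirror everything arriving on eth0 to the VM's interface
        sudo tc qdisc add dev eth0 ingress
        sudo tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
            action mirred egress mirror dev vnet0

        # repeat for each bridge port whose traffic the Windows VM should see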

    Read the article

  • Fiddler - A useful free tool for checking Web Services (and web site) traffic

    - by TATWORTH
    Recently I had reason to be very glad that I had Fiddler. I was able to record some web service traffic and identify a problem, as Fiddler can record both the call to a web service and the response from it. By seeing the actual data traffic, I was able to resolve a problem found in testing in less time than it has taken me to write this blog entry! This tool is also useful for studying general web site traffic. Fiddler is available from http://www.fiddler2.com/fiddler2/ and there are training videos available on that site.
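
    One caveat when tracing web service calls made by a .NET client: the client may bypass Fiddler unless its proxy is set. A minimal app.config sketch, assuming Fiddler's default listening port of 8888:

        <configuration>
          <system.net>
            <defaultProxy>
              <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="false" />
            </defaultProxy>
          </system.net>
        </configuration>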

    Read the article

  • Google Analytics - Traffic Source - Search engine - (Not Provided)

    - by Dharmavir
    I am using Google Analytics. When I go to the "Traffic Source Overview", it shows the keyword as "(not provided)" for almost 40% of my traffic. More than 90% of my search engine traffic is from Google, and still more than 40% of the keywords are "(not provided)". Can anyone explain what is going on here, or how I can get that data? It comes up as the first entry and is the biggest keyword in the list. Could it be some crawler, or secure Google search?

    Read the article

  • How do I keep a table up to date across 4 db's to be used in SQL Replication Filtering?

    - by Refracted Paladin
    I have a WinForms data entry application that uses four separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync, and this is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), WorkerName (varchar), and ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it in the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in one database and then use triggers to keep it updated in the other three databases. In order to be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN
            (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME())

    You can also chain filters together; this next one sits below the first one, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app."
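
    For illustration, the insert case of such a trigger might look like the sketch below; the second database name (SecondDb) and the identical table definition there are assumptions, and the update/delete cases would follow the same pattern:

        CREATE TRIGGER dbo.trgWorkerOwnershipInsertSync
        ON dbo.tblWorkerOwnership
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            -- push new mapping rows into the copy of the table in the other database
            INSERT INTO SecondDb.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
            SELECT PrimaryID, WorkerName, ClientID
            FROM inserted;
        END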

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    Shaping the feed

    The Product feed we created earlier returns too much information about our products. Let's change this so that only the following properties are returned: ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.

    Splitting the Entity

    To shape our data according to the requirements above, we are going to split our Product entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other entity in our query interceptor to pre-filter the data so that discontinued products are not returned. Go to the design surface for the Entity Model and make a copy of the Product entity; a "Product1" entity gets created. Rename Product1 to ProductDetail. Right click on the Product entity, select "Add Association" and make a one to one association between Product and ProductDetail. Keep only the properties we wish to expose on the Product entity and delete all other properties on it (you delete a property on an entity by right clicking on the property and selecting "delete"). Keep the ProductID on the ProductDetail, and delete any other property on the ProductDetail entity that is already present in the Product entity. Your design surface should look like below:

    Mapping Entity to Database Tables

    Right click on "ProductDetail" and go to "Table Mapping", then add a mapping to the "Products" table in the Mapping Details. After mapping ProductDetail, you should see the following.

    Add a referential constraint. Let's add a referential constraint, which is similar to a referential integrity constraint in SQL. Double click on the association between the entities and add the constraint with "Principal" set to "Product".

    Let us review what we did so far:

    We made a copy of the Product entity and called it ProductDetail
    We created a one to one association between these entities
    Excluding the ProductID, we made sure properties were not duplicated between these entities
    We added a ProductDetail entity to Products table mapping (Entity to Database)
    We added a referential constraint between the entities

    Let's build our project. We get the following error: "'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …" The reason for this error is that our Product entity no longer has a "Discontinued" property; we "moved" it to the ProductDetail entity, since we want our Product entity to contain only properties that will be exposed by our feed. Since we have a one to one association between the entities, we can easily rewrite our query interceptor like so:

        [QueryInterceptor("Products")]
        public Expression<Func<Product, bool>> OnReadProducts()
        {
            return o => o.ProductDetail.Discontinued == false;
        }

    Similarly, all "hidden" properties of the Product table are available to us internally (through the ProductDetail entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.

    To see the data in JSON format, you have to create a request with the following request header (easy to do in jQuery):

        Accept: application/json, text/javascript, */*

    The result should look like this:

        { "d" : { "results": [
            { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" },
              "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags", "UnitPrice": "18.0000", "UnitsInStock": 39 },
            { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" },
              "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles", "UnitPrice": "19.0000", "UnitsInStock": 17 },
            ...
            ...

    If anyone has the $format operation working, please post a comment; it was not working for me at the time of writing this. We have successfully pre-filtered our data to expose only products that have not been discontinued, and shaped our data so that only certain properties of the entity are exposed. Note that there are several other ways you could implement this, such as creating a QueryView, a stored procedure, or a DefiningQuery. You have seen how easy it is to create an OData feed, shape the data and pre-filter it while hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate persons I have ever met, Pablo Castro – the architect of Astoria (WCF Data Services). Watch his MIX 2010 presentation titled "OData: There's a Feed for That" here. Download Sample Project for VS 2010 RTM NortwindODataFeed.zip

    Read the article

  • What comment-spam filtering service works?

    - by Charles Stewart
    From an answer I gave to another question: there are comment filtering services out there that can analyse comments in a manner similar to mail spam filters (all links go to the client API page, organised from simplest API to most complex). Steve Kemp (again) has an XML-RPC-based comment filter: it's how Debian filters comments, and the code is free software, meaning you can run your own comment filtering server if you like. There's Akismet, which is from the WordPress universe. And there's Mollom, which has an impressive list of users; it's closed source, and it might say "not sure" about a comment, which is intended to prompt offering a CAPTCHA to check the user. For myself, I'm happy with offline by-hand filtering, but I suggested Kemp's service to someone who had an underwhelming experience with Mollom, and I'd like to pass on more reports from anyone who has tried these or other services.

    Read the article

  • Should my blog be directly on my website?

    - by steve
    I have my newly launched website at www.slicify.com (it redirects to a secure subdomain). I currently have a separate blog on WordPress, slicify.wordpress.com, for a couple of reasons: I don't really want to mix my site code (it's a complex ecommerce site written in ASP.NET) with blog code, for ease of maintenance etc., and WordPress is already great at blogs - it seems silly to reinvent the wheel by trying to integrate blog functionality into my site. However, is keeping my blog on a separate domain going to hurt me in terms of PageRank or traffic? FWIW: while it's early days, I can see from Google Analytics that a good deal of referral traffic is already coming from my WordPress site to my main site, so at least it seems to be drawing potential users in.

    Read the article

  • Digg alternatives for blog and unpopular users? [closed]

    - by Wladimir Ivanov
    Hello all. I'm struggling to build an audience for an electronic music blog. The blog is relatively new and has around 30 pages, and I get approximately 120 unique visitors daily. I know about directories, RSS, comments and guest blogging, but is there another, more effective strategy for building a quality audience? As far as I can see, there aren't enough materials about this in my country nowadays. What about Digg and Reddit? Every time I post a link there, no traffic comes to me. Can you suggest other Digg/Reddit/StumbleUpon tactics for getting followers, or similar sites that would tend to give me some serious traffic? Can you suggest sites appropriate for linking to a music blog? Best regards.

    Read the article
