Search Results

Search found 8460 results on 339 pages for 'links'.

  • How to delete all your old website data from the internet?

    - by Akky Awesøme
    I had my website at rohbits.com, but for some reason I had to delete it and recreate it at this URL: www.rohbits.com/blog. My problem is that the old links are still visible in Google search, and when people click on those links, they land on the hosting company's 404 error page. I want to either remove all the previous data from the search engines or serve a 404 error page of my own so that I can tell my visitors where the actual website is. I have already redirected all the traffic that comes to rohbits.com to www.rohbits.com/blog, but when visitors click the expired links, they still get that error page. One sample expired link is this one: http://rohbits.com/wordpress-tricks.
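
    For getting expired URLs out of the index, Google Webmaster Tools offers a URL removal request; for the rest, here is a minimal sketch of what the Apache side often looks like (paths illustrative, assuming an Apache host with mod_alias enabled):

        # .htaccess sketch for the old rohbits.com root (Apache, mod_alias)
        # map a known expired URL onto its new home with a permanent redirect:
        Redirect 301 /wordpress-tricks http://www.rohbits.com/blog/wordpress-tricks
        # (a RedirectMatch catch-all works too, as long as it can't loop into /blog)
        # and serve your own 404 page for anything that still falls through:
        ErrorDocument 404 /404.html

    A 301 also tells the search engines that the old URL has moved permanently, so the stale entries eventually drop out on their own.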

    Read the article

  • Web App for storing and organize programming information?

    - by Fabzter
    After several years of coding (I consider myself a coder rather than a programmer), I've found myself with a pile of links, loose snippets, and coding tips dispersed across the web. Scattered like that it is barely usable, even though every bit is important or interesting. I thought of simply storing the links in Delicious or something similar, but it's not really the links I want to keep, just the succinct info. So I was thinking of using some web app, something like a wiki, maybe much simpler, so that I could access it through my mobile if I need to. I could code it, but as I stated before, I'm more of a code monkey, and I'm sure my solution would be far from decent... Can anyone give me recommendations on this?

    Read the article

  • Another website is mirroring and ranks above my site in search results

    - by Marlboro Goodluck
    There is a site of ill repute known as thedirty which has completely mirrored my site and now has links appearing at the #1 spot on Google using my content. I checked my log files and noticed that this site has been crawling mine for some time, and it also has 10,000 links from its site to mine. I have blocked access for users referred from this site and have already reported them to Google as web spam. I also disavowed the domain. How are they getting top links in Google (even overtaking mine) with such nefarious tactics? What are the steps to completely eliminate an issue like this?
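
    For the blocking step, a hedged sketch of how referrer-based denial is often done on Apache (the domain pattern is illustrative); this complements, rather than replaces, the spam report and the disavow:

        # .htaccess sketch: refuse any request referred from the mirroring site
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} thedirty\. [NC]
        RewriteRule .* - [F,L]
        # their crawler can also be refused by User-Agent or IP once identified, e.g.:
        # RewriteCond %{HTTP_USER_AGENT} SomeBotName [NC]

    Denying their crawler at the server level keeps the mirror from staying fresh, which tends to matter more long-term than blocking referred visitors.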

    Read the article

  • best cloud storage + rsnapshot

    - by humbledude
    I’ve started using rsnapshot as the backup system for my home PC. I really like the idea of hard links and how they are handled, but I can't find the best workflow. Currently I keep my snapshots on the same partition and, let's say, copy the newest one to a pen drive at the end of the week. Cloud storage is what I'm looking for. Dropbox doesn't fit rsnapshot's needs; moreover, there is no way to make it respect hard links — every snapshot is treated as a full snapshot. Renting a server is pretty expensive, so my question is: are there better alternatives for backup in the cloud? I would like to benefit from hard links and send only incremental backups, just as I do locally.
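
    One way to get rsnapshot-style hard links against a remote machine is rsync's --link-dest option; a hedged sketch (host and paths are hypothetical) that transfers only changed files and hard-links everything else against the previous snapshot:

        # new snapshot for today; unchanged files become hard links
        # to yesterday's copy on the receiving side
        rsync -a --delete \
              --link-dest=/snapshots/2012-06-01/ \
              /home/user/ \
              backupserver:/snapshots/2012-06-02/

    The catch is that this needs a real rsync/SSH endpoint on the other side, which plain file-sync services like Dropbox don't provide.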

    Read the article

  • "Mega Menus" for SEO [duplicate]

    - by Thought Space Designs
    This question already has an answer here: "How do I handle having too many links on a webpage because of my menu"
    I'm using the term "Mega Menus" loosely here. I'm redesigning my WordPress site (it's going to be responsive), and as part of the redesign I was debating incorporating some sort of descriptive menu setup. For example, normal navigation drop-down menus come in the form of unordered lists of links, like so:

        <nav>
          <ul>
            <li><a href="#">Link1</a></li>
            <li><a href="#">Link2</a></li>
            <li>
              <a href="#">Link3</a>
              <ul>
                <li><a href="#">Sub Link1</a></li>
                <li><a href="#">Sub Link2</a></li>
                <li><a href="#">Sub Link3</a></li>
              </ul>
            </li>
            <li><a href="#">Link4</a></li>
          </ul>
        </nav>

    What I'm looking to do is build my drop-down menus with more information than a standard menu carries. For example, I have a top-level link named "Team", and under that link I want a large drop-down that contains head shots, headers (in the form of styled p tags) and brief (<100 words) descriptions of each team member (only 2 currently). I want to accompany this with a "Read More" link that takes you to their actual team page. This is just one example, of course; the other top-level links would also have descriptive drop-downs in the same fashion. On mobile, I was planning on hiding the "mega menu" and delivering a standard unordered list of links. Here's what I was thinking for overall structure and syntax:

        <nav>
          <ul>
            <li><a href="#">Home</a></li>
            <li><a href="#">About</a></li>
            <li>
              <a href="#">Team</a>
              <ul>
                <!-- DESKTOP -->
                <li class="mega-menu row">
                  <a class="col-sm-6" href="#">
                    <div class="row">
                      <div class="col-sm-4">
                        <img src="#" alt="Team Member 1" />
                      </div>
                      <div class="col-sm-8">
                        <p class="header">Team Member 1</p>
                        <p>Short description goes here.</p>
                      </div>
                    </div>
                  </a>
                  <a class="col-sm-6" href="#">
                    <!-- OTHER TEAM MEMBER INFO -->
                  </a>
                </li>
                <!-- END DESKTOP -->
                <!-- MOBILE -->
                <li><a href="#">Team Member 1</a></li>
                <li><a href="#">Team Member 2</a></li>
                <!-- END MOBILE -->
              </ul>
            </li>
            <li><a href="#">Contact</a></li>
          </ul>
        </nav>

    Can anybody think of any potential SEO ramifications of doing this? I'm not going to be loading these menus full of links, so it shouldn't hurt page rank, but what are the effects of having a good bit of text and maybe even forms within nav elements? Is there such a thing as overloading nav with HTML?
    EDIT: Here's an example of what the menu would look like rendered on desktop. I'm currently hovering the "Team" menu, though you can't see that because the mouse cursor doesn't appear in the screenshot.
    EDIT 2: This question is not a duplicate. I'm not going to have "too many" links in my menus. I'm wondering how having images and text inside of header navigation will affect my menus. Also, I don't just want "yes, this is bad" answers. Please cite your sources and be specific with your reasoning.

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site, my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line-break processing for new lines and paragraphs. This is typical of what I do with most multi-line text input in my apps, and it works very well with very little development effort involved. Then the client sprang another note: oh, by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try to avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway.

    There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically, we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to load, links need to work, and so on. While the 'legit' HTML posted by these agents is basic in nature, it spans most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution.

    I thought the best solution would be to use an HTML parser - in this case the Html Agility Pack - and then run through all the HTML markup provided, removing any blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the threads mentioned above, but I found that whitelists make sense in simple scenarios where you might allow manual HTML input; when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage, since the vast majority of elements and attributes can be allowed. Whitelisting also gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

    The Microsoft Web Protection Library (AntiXSS)
    My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitization library in the Microsoft Web Protection Library (formerly the AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitization. Initially I thought the Sanitizer class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitizer class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
    To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else left intact. Here's the original HTML:

        var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
                    "<a href='http://west-wind.com'>West Wind</a> site. " +
                    "<img src='http://west-wind.com/images/new.gif' /> ";

    and the code to sanitize it with the AntiXSS Sanitizer class:

        @Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

    This produced a not-so-useful sanitized string:

        Here we go. Visit the <a>West Wind</a> site.

    While it removed the <script> tag (good), it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict, this becomes useless to me. I couldn't find any way to customize the whitelist, nor is there code available in this 'open source' library on CodePlex.

    Using Html Agility Pack for HTML Parsing
    The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer application object, which is inadequate in web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model for full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before, you'll feel right at home with the Agility Pack.

    Blacklist-based HTML Parsing to Strip XSS Code
    For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument over what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration, and in the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so the blacklist includes the obvious <script> tag plus any tag that allows loading of external content, including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude certain attributes, or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but it is customizable and can be added to.
    Here's my HtmlSanitizer implementation:

        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using HtmlAgilityPack;

        namespace Westwind.Web.Utilities
        {
            public class HtmlSanitizer
            {
                public HashSet<string> BlackList = new HashSet<string>()
                {
                    { "script" }, { "iframe" }, { "form" }, { "object" },
                    { "embed" }, { "link" }, { "head" }, { "meta" }
                };

                /// <summary>
                /// Cleans up an HTML string and removes HTML tags in blacklist
                /// </summary>
                public static string SanitizeHtml(string html, params string[] blackList)
                {
                    var sanitizer = new HtmlSanitizer();
                    if (blackList != null && blackList.Length > 0)
                    {
                        sanitizer.BlackList.Clear();
                        foreach (string item in blackList)
                            sanitizer.BlackList.Add(item);
                    }
                    return sanitizer.Sanitize(html);
                }

                /// <summary>
                /// Cleans up an HTML string by removing elements
                /// on the blacklist and all elements that start with onXXX.
                /// </summary>
                public string Sanitize(string html)
                {
                    var doc = new HtmlDocument();
                    doc.LoadHtml(html);
                    SanitizeHtmlNode(doc.DocumentNode);

                    //return doc.DocumentNode.WriteTo();

                    string output = null;

                    // Use an XmlTextWriter to create self-closing tags
                    using (StringWriter sw = new StringWriter())
                    {
                        XmlWriter writer = new XmlTextWriter(sw);
                        doc.DocumentNode.WriteTo(writer);
                        output = sw.ToString();

                        // strip off XML doc header
                        if (!string.IsNullOrEmpty(output))
                        {
                            int at = output.IndexOf("?>");
                            output = output.Substring(at + 2);
                        }
                        writer.Close();
                    }
                    doc = null;

                    return output;
                }

                private void SanitizeHtmlNode(HtmlNode node)
                {
                    if (node.NodeType == HtmlNodeType.Element)
                    {
                        // check for blacklist items and remove
                        if (BlackList.Contains(node.Name))
                        {
                            node.Remove();
                            return;
                        }

                        // remove CSS Expressions and embedded script links
                        if (node.Name == "style")
                        {
                            if (string.IsNullOrEmpty(node.InnerText))
                            {
                                if (node.InnerHtml.Contains("expression") ||
                                    node.InnerHtml.Contains("javascript:"))
                                    node.ParentNode.RemoveChild(node);
                            }
                        }

                        // remove script attributes
                        if (node.HasAttributes)
                        {
                            for (int i = node.Attributes.Count - 1; i >= 0; i--)
                            {
                                HtmlAttribute currentAttribute = node.Attributes[i];

                                var attr = currentAttribute.Name.ToLower();
                                var val = currentAttribute.Value.ToLower();

                                // remove event handlers
                                if (attr.StartsWith("on"))
                                    node.Attributes.Remove(currentAttribute);

                                // remove script links
                                else if (
                                         //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                         val != null && val.Contains("javascript:"))
                                    node.Attributes.Remove(currentAttribute);

                                // remove CSS Expressions
                                else if (attr == "style" &&
                                         val != null &&
                                         (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                                    node.Attributes.Remove(currentAttribute);
                            }
                        }
                    }

                    // Look through child nodes recursively
                    if (node.HasChildNodes)
                    {
                        for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                        {
                            SanitizeHtmlNode(node.ChildNodes[i]);
                        }
                    }
                }
            }
        }

    Please note: use this as a starting point only for your own parsing, and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow them. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a whitelist approach if you want to go that route.
    The code above is semi-generic for allowing full-featured HTML fragments that only disallow script-related content. The Sanitize method walks each node of the document and recursively drills into all of its children until the entire document has been traversed. Note that the code uses an XmlTextWriter to write the output: this is done to preserve XHTML-style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklisted elements and removes those that are not allowed. Note that the blacklist is configurable, either via the instance property or via the string parameter list on the static method. Additionally, the code goes through each element's attributes and checks a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities out there, but a lot of them apply to ancient browsers (IE6 and versions of Netscape); many of these glaring holes (like CSS expressions - WTF, IE?) have been removed in modern browsers.

    What a Pain
    To be honest, this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even close to what I needed, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck; this is really something that should come from Microsoft as the systems vendor, or possibly from a third party that provides security tools. Luckily for my application, we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small; this is not a wide-open, directly public-facing Internet application.

    As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown, or even a good HTML edit control that can put some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far: how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
    So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species: no other species manages to waste as much time, effort and resources as we humans do :-)

    Resources
    Code on GitHub
    Html Agility Pack
    XSS Cheat Sheet
    XSS Prevention Cheat Sheet
    Microsoft Web Protection Library (AntiXss)
    StackOverflow links:
    http://stackoverflow.com/questions/341872/html-sanitizer-for-net
    http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
    http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Security, HTML, ASP.NET, JavaScript

    Read the article

  • Search Alternative Search Engines from within Bing’s Search Page

    - by Asian Angel
    So you love using Bing Search but may still be curious to see what another search engine would return for the same query. Now you can search with another engine from within the Bing Search page, and enjoy numbered results, using two simple user scripts. Note: These user scripts can be added to other browsers as well (e.g. Iron, Opera, etc.).

    Before
    Bing Search does nicely on searches, but what if you would like to try the same search with another search engine? Having to manually open a new tab, navigate to the appropriate website, and then start a new search is not too convenient. Another possible frustration for some people is knowing just how many search results they have looked through. Both of these small problems are easy to fix with two wonderful user scripts.

    Installing the Scripts
    The first script that we installed (you may do either one first) adds the alternative search engine links. Click "Install" to get started. Note: For our example we had the Greasemonkey extension installed. When the confirmation window pops up, click on "Install" to finish adding the user script to Firefox. Repeat the same procedure to add the second script, confirm the second user script installation, and you are ready to enjoy nicer Bing Search results.

    After
    As you can see, there are two small, unobtrusive differences in our search results. The alternative search engine links are conveniently located at the top of the page, and now you can easily tell just how many search results you have looked through. Here are the results when we transferred the search over to Yahoo, and again when our search transferred to Ask Search. The alternative search links can be very helpful if Bing is not providing the kind of search results that you are hoping for. Still going very nicely past the 100 mark…

    Conclusion
    If you have been wanting a small booster for searching with Bing, then these two scripts will get you on your way. Using the Opera browser? See our how-to for adding user scripts to Opera here.

    Links
    Install the Bing (Alternate Search Engine Links) User Script
    Install the Bing Numbered Search Results User Script
    Download the Greasemonkey extension for Firefox (Mozilla Add-ons)
    Download the Stylish extension for Firefox (Mozilla Add-ons)

    Read the article

  • Silverlight Cream for March 30, 2010 -- #825

    - by Dave Campbell
    In this Issue: Jeremy Likness, Tim Greenfield, Tim Heuer, ondrejsv, XAML Ninja, Nikhil Kothari, Sergey Barskiy, Shawn Oster, smartyP, Christian Schormann (x2), and John Papa and Glenn Block.

    Shoutouts:
    Victor Gaudioso produced a RefCard for DZone: Getting Started with Silverlight and Expression Blend. Way to go Victor... it looks great!
    Gavin Wignall announced Metia's launch of a FourSquare and Bing Maps mash-up called Near.me.
    Cheryl Simmons talks about VS2010 and the design surface: Changing Templates with the Silverlight Designer (and seeing the changes immediately).
    Michael S. Scherotter posted that the New York Times Silverlight Kit was updated for Windows Phone 7 Series.
    Jaime Rodriguez posted about 2 free chapters in his new book (with Yochay Kiriaty): A Journey Into Silverlight on Windows Phone - via Learning Windows Phone Programming.
    Did you know there was "MSDN Radio"?? Tim Heuer posted follow-up answers to this morning's show: MSDN Radio follow-up answers: Prism for Silverlight, DomainServices and relationships.
    Michael Klucher posted a great set of links for WP7 game development this morning: Great Game Development Tutorials for Windows Phone.
    Zhiming Xue has 3 pages of synopsis and links for everything Windows Phone at MIX. This is the first, but at the top of the pages are links to the other two: Windows Phone 7 Content From MIX10 – Part I.

    From SilverlightCream.com:

    Using WriteableBitmap to Simplify Animations with Clones
    Jeremy Likness takes a break from his LOB posts to demonstrate a page-flip animation, using WriteableBitmap to simplify the animation with clones.
    SAX-like Xml parsing
    Want some experience or fun with Rx? Tim Greenfield has a post up on building an observable XmlReader.
    Installing Silverlight applications without the browser involved
    Last night I blogged Mike Taulty's take on the "silent install" for an OOB app; tonight I'm posting Tim Heuer's insight on the topic.
    How to: Create computed/custom properties for sample data in Blend/Sketchflow
    ondrejsv posted an example of digging into the files that control the sample data for Blend to get what you really want.
    PathListBox Adventures – radial layout
    Check out the radial layout XAML Ninja did using the PathListBox... and all the code is available.
    RIA Services and Validation
    Nikhil Kothari has a great (duh!) post up that follows his Silverlight TV on the same subject: RIA Services and validation... lots of good external links too.
    Windows Phone 7 Application with OData
    Sergey Barskiy built an OData-to-WP7 app using the feed from MIX10. You can see a list of sessions and click on one to see its details.
    Getting Blur And DropShadow to work in the Windows Phone Emulator
    Shawn Oster responds to some forum questions about Blur and DropShadow effects not showing up in the WP7 emulator, and gives the code trick we have to use for now.
    Metro Icons for Windows Phone 7
    We all got the other icon set for WP7 from MSDN, but smartyP pulled the Metro icons from the PPT deck of the MIX10 presentations... good job!
    Fonts in SketchFlow
    Christian Schormann talks about fonts in SketchFlow: where they live on your machine, and how you can use them.
    Blend 4: About Path Layout, Part III
    Christian Schormann also has Part III of his epic tutorial on Path Layout and Blend. This one is on dynamically resizing layouts, and he has links back to the other two if you missed them... or you can find them with a search at SilverlightCream... :)
    Simple ViewModel Locator for MVVM: The Patients Have Left the Asylum
    John Papa and Glenn Block teamed up to solve the View First model without the maintenance involved with the ViewModel locator, by using MEF. It only took these guys an hour... sigh... :)

    Stay in the 'Light!

    Read the article

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory, but I thought for a file of 500MB I should still be able to do it in memory, since I have 1GB available. However, when I try to read the file in, Python seems to allocate a lot more memory than is needed by the file on disk, so even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile = open("links.csv", "r")
        edges = []
        count = 0

        # count the total number of lines in the file
        for line in infile:
            count = count + 1
        total = count
        print "Total number of lines: ", total

        infile.seek(0)
        count = 0
        for line in infile:
            edge = tuple(map(int, line.strip().split(",")))
            edges.append(edge)
            count = count + 1
            # for every million lines print memory consumption
            if count % 1000000 == 0:
                print "Position: ", edge
                print "Read ", float(count) / float(total) * 100, "%."
                mem = sys.getsizeof(edges)
                for edge in edges:
                    mem = mem + sys.getsizeof(edge)
                    for node in edge:
                        mem = mem + sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)      Read  3.26693612356 %.   Memory (Bytes):  64348736
        Position:  (38857, 103574)   Read  6.53387224712 %.   Memory (Bytes):  128816320
        Position:  (83609, 63498)    Read  9.80080837067 %.   Memory (Bytes):  192553000
        Position:  (139692, 1078610) Read  13.0677444942 %.   Memory (Bytes):  257873392
        Position:  (205067, 153705)  Read  16.3346806178 %.   Memory (Bytes):  320107588
        Position:  (283371, 253064)  Read  19.6016167413 %.   Memory (Bytes):  385448716
        Position:  (354601, 377328)  Read  22.8685528649 %.   Memory (Bytes):  448629828
        Position:  (441109, 3024112) Read  26.1354889885 %.   Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
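
    For reference, a sketch of one commonly suggested workaround (names are illustrative): a list of 30 million tuples pays Python object overhead for every tuple and every int, while the array module stores raw machine values with no per-object cost:

        import array

        col1 = array.array("l")   # raw C longs, no per-object overhead
        col2 = array.array("l")
        infile = open("links.csv", "r")
        for line in infile:
            a, b = line.split(",")
            col1.append(int(a))
            col2.append(int(b))
        infile.close()

        # sort row indices by the second column without materializing tuples
        order = sorted(xrange(len(col2)), key=col2.__getitem__)

    Two arrays of 30 million longs come to a few hundred megabytes at most, which should fit the stated 1GB budget far more comfortably than the tuple list.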

    Read the article

  • How to fix IE ClearType + jQuery opacity problem in this script?

    - by Justine
    Hello, I'm having a rather common problem (or so it seems, after some googling around...) with IE messing up both bold text AND transparent PNGs while animating opacity with jQuery. You can view the sample here: http://dev.gentlecode.net/dotme/index-sample.html (it only occurs in IE, obviously). I've seen some blog posts saying the fix is to remove the filter attribute, but I'm not sure how to apply that to the script I'm using, since I got it from a tutorial and am still learning jQuery... The script goes as follows:

        $('ul.nav').each(function() {
            var $links = $(this).find('a'),
                panelIds = $links.map(function() { return this.hash; }).get().join(","),
                $panels = $(panelIds),
                $panelWrapper = $panels.filter(':first').parent(),
                delay = 500;

            $panels.hide();

            $links.click(function() {
                var $link = $(this),
                    link = (this);
                if ($link.is('.current')) {
                    return;
                }
                $links.removeClass('current');
                $link.addClass('current');
                $panels.animate({ opacity: 0 }, delay);
                $panelWrapper.animate({ height: 0 }, delay, function() {
                    var height = $panels.hide().filter(link.hash).show().css('opacity', 1).outerHeight();
                    $panelWrapper.animate({ height: height }, delay);
                });
                return false;
            });

            var showtab = window.location.hash ? '[hash=' + window.location.hash + ']' : ':first';
            $links.filter(showtab).click();
        });

    I would appreciate it if someone could go over it and show me how to fix the opacity issue. Will the filter method also fix the trouble I'm having with transparent PNGs getting pixelated, ugly borders like the bold type? Thanks in advance for all your help!
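
    The fix usually suggested is to strip IE's proprietary filter style once each fade completes, since jQuery leaves it behind and it is what breaks both ClearType and PNG transparency. A hedged sketch of how the inner callback above could do it:

        $panelWrapper.animate({ height: height }, delay, function () {
            // IE implements opacity through a proprietary 'filter' style;
            // removing it after the animation restores ClearType and PNG rendering
            $panels.filter(link.hash).each(function () {
                if (this.style.removeAttribute) {
                    this.style.removeAttribute('filter');
                }
            });
        });

    The same removal after any other opacity animation in the script should cure the pixelated PNG borders as well.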

    Read the article

  • jquery to have animated popup

    - by Anish Mohan
    Hi, on my main page I need to have the main links displayed, and on mouse hover of each main link it should display a layer (something like a modal popup, but a smooth one) in which the user can select other links. It should allow the user to move smoothly between the main links. Can I use jQuery for this? If not, what should I use? Please help... Thanks in advance!
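
    Yes, jQuery handles this well; a minimal hedged sketch (the markup class names are hypothetical) using hover with fade animations for the smooth layer:

        $('.main-link').hover(
            function () {
                // stop(true, true) clears queued animations when the mouse moves quickly
                $(this).find('.popup-layer').stop(true, true).fadeIn(250);
            },
            function () {
                $(this).find('.popup-layer').stop(true, true).fadeOut(250);
            }
        );

    Each .popup-layer would start hidden (display: none) and be absolutely positioned under its link, so moving between main links simply fades one layer out and the next one in.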

    Read the article

  • Implementing PageRank using MapReduce

    - by Nick D.
    Hello, I'm trying to get my head around an issue with the theory of implementing PageRank with MapReduce. I have the following simple scenario with three nodes: A, B, C. The adjacency matrix is here:

        A { B, C }
        B { A }

    The PageRank for B, for example, is equal to:

        PR(B) = (1-d)/N + d * ( PR(A) / C(A) )

        N     = number of incoming links to B
        PR(A) = PageRank of incoming link A
        C(A)  = number of outgoing links from page A

    I am fine with all the schematics and how the mapper and reducer would work, but I cannot get my head around how, at the time of calculation by the reducer, C(A) would be known. How will the reducer, when calculating the PageRank of B by aggregating the incoming links to B, know the number of outgoing links from each of those pages? Does this require a lookup in some external data source?
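
    A common resolution, sketched below as pseudocode (emit and the node record are illustrative, not a specific framework's API): the mapper for page A already knows A's outgoing links, so it divides PR(A) by C(A) before emitting, and re-emits the link structure so it survives into the next iteration. The reducer then never needs C(A) itself:

        def map(page_id, node):
            # node holds this page's current rank and its outgoing links
            for target in node.links:
                emit(target, node.rank / len(node.links))   # PR(A)/C(A), computed map-side
            emit(page_id, node.links)                       # pass the graph structure through

        def reduce(page_id, values):
            links, rank_sum = [], 0.0
            for v in values:
                if isinstance(v, list):
                    links = v            # the structure record
                else:
                    rank_sum += v        # a PR(X)/C(X) contribution
            emit(page_id, Node(rank=(1 - d) / N + d * rank_sum, links=links))

    So no external lookup is needed: each contribution arrives at the reducer already divided by the sender's out-degree.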

    Read the article

  • URLRewriter.net fails relative paths when using more than one substring in URL

    - by Andreas Strandfelt
    Hi. I have installed URLRewriter on my server, and it works fine, but I have a rather big problem: relative links in hyperlinks, CSS references, images etc. don't work when I have URLs with more than one segment. E.g. (sorry, no http:// in front, as I do not have enough reputation): dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft leads to the path dkbyg.strandweb.dk/Workers.aspx and works just fine. But dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft/Midtjylland leads to dkbyg.strandweb.dk/Workers.aspx?Region=Midtjylland using this line in the Web.config:

        <rewrite url="~/Leje-og-udlejning-arbejdskraft/(.+)" to="~/Workers.aspx?Region=$1"/>

    It rewrites just fine, but my relative links don't work anymore. CSS, images, links and so on now think my root is http://dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft, which of course doesn't exist. Can this be fixed? All my links are set correctly using ~/, like this:

        <asp:HyperLink ID="HyperLink3" CssClass="black_text" NavigateUrl="~/Forgot-Password" runat="server">I have forgotten my password</asp:HyperLink>
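
    Server-side controls with ~/ resolve correctly because ASP.NET rewrites them; the ones that break are plain document-relative references. The usual fix is to make those root-relative, or to anchor the page with a base element; a hedged sketch (host name taken from the question, asset paths hypothetical):

        <!-- in the <head> of the master page: all relative URLs now resolve from here -->
        <base href="http://dkbyg.strandweb.dk/" />

        <!-- or make each static reference root-relative instead: -->
        <link rel="stylesheet" href="/css/site.css" />
        <img src="/images/logo.png" alt="logo" />

    Either approach makes the asset URLs independent of how many segments the rewritten URL contains.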

    Read the article

  • Kohana 3 jQuery/AJAX request not working

    - by dscher
    I am trying to post some data to a controller in Kohana 3 using the jQuery AJAX method, but I have an issue with the data not getting to where I want it to be. I want the data to go to the /application/classes/controller/stock.php file, where it will be processed. I can't seem to figure this one out; hopefully someone can help. My jQuery AJAX call is:

        $.ajax({
            type: 'POST',
            url: 'add_stock',
            data: { 'links': 'link_array' }
        });

    'add_stock' is the name of the action within the controller. I didn't know what else to try; I've also tried '.' and './', hoping that would be right, but it's not. In Firebug, although it says the request was 200 OK, I see that the response is "Failed to load source for: http://localhost/ddm/v2/stocks/add_stock", and the script in my controller that grabs the data isn't working. Here is that code in case it helps:

        $links = $_POST['links'];
        $link_obj = Jelly::factory('link')
            ->set('stock', $stock->id)
            ->set('links', $links);
        $link_obj->save();

    I think the problem is that I'm giving the AJAX call the ROUTE and not the actual page it needs to deliver the POST data to. I just can't figure it out. Any help?
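
    Two likely culprits, shown in a hedged sketch: the relative URL 'add_stock' resolves against whatever route is currently open, so an absolute route path is safer; and quoting 'link_array' posts the literal string rather than the variable's contents:

        $.ajax({
            type: 'POST',
            // absolute path to the stocks controller's add_stock action
            url: '/ddm/v2/stocks/add_stock',
            // unquoted, so the variable's value is serialized,
            // not the string 'link_array'
            data: { links: link_array }
        });

    With the absolute path, the request hits the same controller action no matter which page issued it.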

    Read the article

  • Why modules link is not coming on Administrator Main page?

    - by Nitz
    I have added two modules to my Drupal site: ME Alias and Mime Mail. Whenever we add new modules to the site, they get links on the admin page. But after activating these modules, I can't see their links on the admin page. This happens only on the server; on my localhost I can see the modules' links on the admin page just fine. I have put a screenshot of the problem below: in the image you can see that I have activated both modules, but on the admin page I can't see them. Why are they not appearing? I have tried clearing all the caches and trying again, but the links still don't show up. Here is the screenshot.

    Read the article

  • Disabling a link tag using JavaScript on my printable page

    - by Fiona Holder
    I have a script that creates a printable page by copying the HTML across and then doing some manipulation on page load, such as disabling the buttons on the page. I also want to disable the links on the page. I don't really mind if they still look like links, as long as they don't do anything and don't throw any JavaScript errors! The anchor tag doesn't seem to have a disabled attribute... Unfortunately, I can't use jQuery, so JavaScript only please! Edit: I want to disable the links, buttons etc. on the page so that when the 'printable page' opens in another window, the user cannot mess with it by clicking buttons or links. I want it to be, essentially, a 'frozen snapshot' of the page that they want to print.
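
    A plain-JavaScript sketch of one way to do it against the printable window's document on load (the function name is illustrative): keep the href so the links still look like links, but swallow every click:

        function disableLinks(doc) {
            var anchors = doc.getElementsByTagName('a');
            for (var i = 0; i < anchors.length; i++) {
                // the anchor stays styled as a link, but clicking does nothing
                anchors[i].onclick = function () { return false; };
            }
        }

    Calling disableLinks(printWindow.document) alongside the existing button-disabling code would freeze the snapshot without errors, since returning false from onclick simply cancels the navigation.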

    Read the article

  • d3.js force layout increase linkDistance

    - by user1159833
    How do I increase linkDistance without affecting the node alignment? Example: http://mbostock.github.com/d3/talk/20110921/force.html. When I try to increase the circle radius and linkDistance, the layout collapses.

        <script type="text/javascript">
        var w = 1280,
            h = 800,
            z = d3.scale.category20c();

        var force = d3.layout.force()
            .size([w, h]);

        var svg = d3.select("#chart").append("svg:svg")
            .attr("width", w)
            .attr("height", h);

        svg.append("svg:rect")
            .attr("width", w)
            .attr("height", h);

        d3.json("flare.json", function(root) {
          var nodes = flatten(root),
              links = d3.layout.tree().links(nodes);

          force
              .nodes(nodes)
              .links(links)
              .start();

          var link = svg.selectAll("line")
              .data(links)
            .enter().insert("svg:line")
              .style("stroke", "#999")
              .style("stroke-width", "1px");

          var node = svg.selectAll("circle.node")
              .data(nodes)
            .enter().append("svg:circle")
              .attr("r", 4.5)
              .style("fill", function(d) { return z(d.parent && d.parent.name); })
              .style("stroke", "#000")
              .call(force.drag);

          force.on("tick", function(e) {
            link.attr("x1", function(d) { return d.source.x; })
                .attr("y1", function(d) { return d.source.y; })
                .attr("x2", function(d) { return d.target.x; })
                .attr("y2", function(d) { return d.target.y; });

            node.attr("cx", function(d) { return d.x; })
                .attr("cy", function(d) { return d.y; });
          });
        });

        function flatten(root) {
          var nodes = [];

          function recurse(node, depth) {
            if (node.children) {
              node.children.forEach(function(child) {
                child.parent = node;
                recurse(child, depth + 1);
              });
            }
            node.depth = depth;
            nodes.push(node);
          }

          recurse(root, 1);
          return nodes;
        }
        </script>
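
    In this version of the force layout, linkDistance is only a target: the charge (node repulsion) has to grow with it, or the graph folds in on itself. A hedged sketch of the two settings tuned together (the values are illustrative):

        var force = d3.layout.force()
            .size([w, h])
            .linkDistance(60)   // default is 20; longer links need more room
            .charge(-180);      // default is -30; stronger repulsion keeps
                                // larger circles from collapsing onto each other

    Scaling charge roughly in proportion to linkDistance (and to the circle radius) usually preserves the overall alignment of the tree.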

    Read the article

  • one page filter results in new page in javascript

    - by Jake
    I have links set up on one page, and the relationship between the links is parent/child (for example, parent: All; children: Software, Hardware). These links lead the user to a new page that shows results from a populated table. Currently the links all go to essentially the same destination, just with a different filter in the URL. The problem is that the new page also has a JavaScript filter that lets the user choose between All, Software, or Hardware. Understand that if the URL still says they are on the Software page but they just filtered the page to Hardware, that doesn't look good IMO. So what I was trying to do is make the links on the initial page all go to the exact same destination, somehow still know on the new page which link was clicked, and run the JavaScript filter there based on that. Is there a way to find that out from JavaScript? I guess I need a way to pass that value to the new page and retrieve it in JavaScript without showing it in the URL, so I can filter the table for the user based on that value.
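
    One way to pass the choice without putting it in the URL is web storage; a hedged sketch (the element ids and the applyFilter function are hypothetical stand-ins for the existing filter code):

        // on the links page: remember which child link was clicked
        document.getElementById('hardware-link').onclick = function () {
            sessionStorage.setItem('filter', 'hardware');
        };

        // on the results page: read the choice back and run the existing filter
        window.onload = function () {
            var filter = sessionStorage.getItem('filter') || 'all';
            applyFilter(filter);
        };

    Because sessionStorage is scoped to the browser tab, the URL stays identical for every link while each visit still knows which filter to apply.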

    Read the article

  • JQuery TableSorter with Pager and LightWindow problem

    - by Maxim
    Hi to all... I have a tablesorter with the pager plugin attached on my page, with 'Details' links in one of the cells. The links have class='lightwindow', and after clicking one, a LightWindow script brings up a window. It works well on the first page, but when I click Next Page on the tablesorter pager and then click my 'Details' link, it doesn't work correctly; it looks like my links have lost their class='lightwindow' behavior. Any suggestions?

    Read the article

  • Hyperlinks to download files without stopping the current page load

    - by Evgeny
    I've got an ASP.NET page that takes a long time to download and returns partial results as it's loading (as per my previous question). On the page I have some links to download files, i.e. the response headers contain "Content-Disposition: attachment", so that the browser doesn't navigate away from the page. However, if the user clicks one of these links while the page is still loading, it stops loading - normal behaviour, but not what I want in this case. I can get around that by adding target="_blank" to the links, but this momentarily opens a new window and then closes it again (once the browser realises it's an "attachment"). Is there any way to stop those links from interrupting the current page load without this new-window trick? JavaScript is OK.
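
    A common workaround is to point a hidden iframe at the attachment URL, so the main document's load is never interrupted and no window flashes open; a hedged plain-JavaScript sketch (element id and function name are illustrative):

        function downloadFile(url) {
            var frame = document.getElementById('download-frame');
            if (!frame) {
                frame = document.createElement('iframe');
                frame.id = 'download-frame';
                frame.style.display = 'none';
                document.body.appendChild(frame);
            }
            frame.src = url; // Content-Disposition: attachment keeps the page in place
            return false;    // cancels the anchor's own navigation
        }

    Used as: <a href="file.zip" onclick="return downloadFile(this.href)">file.zip</a>; the download prompt appears while the parent page keeps streaming.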

    Read the article

  • ASP.NET Caching : Good As Well As Bad ! Page shows old content!

    - by Shyju
    I have an ASP.NET website where I have implemented page-level caching using the OutputCache directive, and this boosted page performance. My pages have a few parts (some buttons, links and labels) that are specific to the logged-in user; if the user is not logged in, they see different links. Since I implemented page-level caching, even after the user logs in, the page shows the old content (links and buttons meant for the non-logged-in user). Caching is obviously good, but how do I get rid of this problem? Do I need to completely remove caching?
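
    You don't have to drop caching entirely: OutputCache can vary the cached copy by a custom string, so logged-in and anonymous users get separate cache entries. A hedged sketch (the "auth" key name is arbitrary):

        // Global.asax.cs (sketch): one cached copy per authentication state,
        // paired with this directive on the cached page:
        //   <%@ OutputCache Duration="300" VaryByParam="None" VaryByCustom="auth" %>
        using System.Web;

        public class Global : HttpApplication
        {
            public override string GetVaryByCustomString(HttpContext context, string custom)
            {
                if (custom == "auth")
                    return context.Request.IsAuthenticated ? "in" : "out";
                return base.GetVaryByCustomString(context, custom);
            }
        }

    For fragments that must stay fully per-user even inside a cached page, post-cache substitution (the Substitution control) is the other standard tool.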

    Read the article

  • SQLAlchemy: select over multiple tables

    - by ahojnnes
    Hi, I wanted to optimize my database query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                not_(link_table.c.id.in_(
                    select(
                        columns=[request_table.c.recipient],
                        whereclause=request_table.c.donator == donator.id
                    ).as_scalar()
                )),
                link_table.c.id != donator.id,
            ),
            limit=20,
        ).execute().fetchall()

    and tried to merge those two selects into one query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active == True,
                link_table.c.id != donator.id,
                request_table.c.donator == donator.id,
                link_table.c.id != request_table.c.recipient,
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()]
        ).execute().fetchall()

    The database schema looks like:

        link_table = Table('links', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('url', Unicode(250), index=True, unique=True),
            Column('registration_date', DateTime),
            Column('donations_in', Integer),
            Column('active', Boolean),
        )

        request_table = Table('requests', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('recipient', Integer, ForeignKey('links.id')),
            Column('donator', Integer, ForeignKey('links.id')),
            Column('date', DateTime),
        )

    There can be several links (donators) in request_table pointing to one link in link_table. I want links from link_table that have not yet been 'requested'. But the merged query does not work. Is what I'm trying to do actually possible? If so, how would you do it? Thank you very much in advance!
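
    Merging the two selects is possible, but the flattened conditions above ask for a row-by-row inequality, which is not the same as "no matching request exists". The usual single-query form is a LEFT OUTER JOIN that keeps only links with no matching request; a hedged sketch in the same expression style (untested against this schema):

        joined = link_table.outerjoin(
            request_table,
            and_(request_table.c.recipient == link_table.c.id,
                 request_table.c.donator == donator.id))

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            from_obj=[joined],
            whereclause=and_(
                request_table.c.id == None,   # no request row matched: not yet requested
                link_table.c.id != donator.id,
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()],
        ).execute().fetchall()

    The IS NULL test on the joined side replaces the NOT IN subquery while keeping everything in one statement.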

    Read the article

  • Change alexa tracking from artcrew.ro to www.artcrew.ro

    - by DanTdr
    My website has a redirect from artcrew.ro to www.artcrew.ro, but for some reason Alexa only counts the inbound links for the version without www in front. The www version has over 2,000 inbound links, but the non-www version has only 10. Is there any way I could make Alexa see the other inbound links? That would be great. Thanks.

    Read the article
