Search Results

Search found 9713 results on 389 pages for 'dead links'.

  • Navigation view system with webview problem with touches!

    - by Gonçalo Falcão
    Hello, I have searched everywhere and couldn't figure this out. I have a tab bar controller with five navigation controllers. In one of them I have a view with a table view inside; when an item is tapped I push a new view whose hierarchy is view > webview > view. I created that second, transparent view because I need to handle a single tap to hide my toolbar and navigation bar, and the webview was eating all the touches. I added that view and implemented the following in the view controller:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if (touch.tapCount == 2) {
                [NSObject cancelPreviousPerformRequestsWithTarget:self];
            }
            [[wv.subviews objectAtIndex:0] touchesBegan:touches withEvent:event];
            [super touchesBegan:touches withEvent:event];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            [[wv.subviews objectAtIndex:0] touchesMoved:touches withEvent:event];
            [super touchesMoved:touches withEvent:event];
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if (touch.tapCount == 1) {
                [self performSelector:@selector(hideBars) withObject:nil afterDelay:0.3];
            }
            [[wv.subviews objectAtIndex:0] touchesEnded:touches withEvent:event];
            [super touchesEnded:touches withEvent:event];
        }

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
            [[wv.subviews objectAtIndex:0] touchesCancelled:touches withEvent:event];
            [super touchesCancelled:touches withEvent:event];
        }

    wv is my UIWebView IBOutlet. I can now receive the touches in my controller and forward them to the webview, so I thought everything was working: I am able to scroll. But when the page contains links I am not able to click them, even though I have verified that the webview does detect them. Is there another way to implement this so that touches on links still work, or should I replace this workaround for hiding the bars so I keep the full functionality of the webview? Thanks in advance for the help.

  • Determining failing sectors on portable flash memory

    - by Faxwell Mingleton
    I'm trying to write a program that will detect signs of failure in portable flash memory devices (thumb drives, etc.). I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not offer the same kind of predictable low-level access to the hardware, due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining whether a flash drive is failing will be difficult at best, if not impossible (short of constant read failures and device unmounts).

    Flash drives at their end of life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs, like slower throughput, that might indicate a flash drive is going to fail much sooner than normal?

    Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then, is it possible that the drive might report a size under its maximum capacity to account for 'dead' blocks?

    In short, is there any way to circumvent, or at least detect (algorithmically or otherwise), the use of block remapping or other life-extension techniques for flash memory?

    Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com. This is definitely a hardware-related question, but I also desire a software solution, preferably one that I can program myself. If this question is misplaced, I will be happy to migrate it to Server Fault, but I do need a programming solution. Please let me know if you need clarification :) Thanks!
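    For what it's worth, the brute-force surface test described above can be sketched in a few lines of Python. Everything here is an assumption for illustration: the file path, block size, and the 0.5 s "slow" threshold are invented, and on a real system the OS page cache would serve many of these reads back from RAM, so honest timings need O_DIRECT or a cache flush between the write and read passes.

        import os
        import time
        import hashlib

        TEST_FILE = "/media/testdrive/surface_test.bin"   # hypothetical mount point
        BLOCK_SIZE = 1024 * 1024                          # 1 MiB per block
        NUM_BLOCKS = 512                                  # ~512 MiB test file

        def block_pattern(index):
            # Deterministic pseudo-random data, reproducible at readback time.
            seed = hashlib.sha256(str(index).encode()).digest()   # 32 bytes
            return seed * (BLOCK_SIZE // len(seed))

        def surface_test():
            bad, slow = [], []
            with open(TEST_FILE, "wb") as f:
                for i in range(NUM_BLOCKS):
                    f.write(block_pattern(i))
                f.flush()
                os.fsync(f.fileno())                      # force data to the device
            with open(TEST_FILE, "rb") as f:
                for i in range(NUM_BLOCKS):
                    t0 = time.monotonic()
                    data = f.read(BLOCK_SIZE)
                    elapsed = time.monotonic() - t0
                    if data != block_pattern(i):
                        bad.append(i)                     # corrupted readback
                    elif elapsed > 0.5:
                        slow.append((i, elapsed))         # suspiciously slow block
            return bad, slow

    As the question anticipates, the drive's controller remaps freely underneath this, so a clean pass proves little; the interesting signal is a block that reads back wrong, or markedly slower than its neighbors across repeated runs.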

  • Detecting death of spawned process using Windows CRT

    - by Michael Tiller
    Executive summary: I need a way to determine whether a Windows process I've spawned via _spawnl, and am communicating with using FDs from _pipe, has died.

    Details: I'm using the low-level CRT functions on Windows (_eof, _read) to communicate with a process that was spawned via a call to _spawnl with the P_NOWAIT flag. I'm using _pipe to create file descriptors for communicating with this spawned process, and passing those descriptors (the FD numbers) to it on the command line. It is worth mentioning that I don't control the spawned process; it's a black box to me.

    It turns out that the process we are spawning occasionally crashes, and I'm trying to make my code robust to this by detecting the crash. Unfortunately, I can't see a way to do so. It seems reasonable to expect that a call to _eof or _read on one of those descriptors would return an error status (-1) if the process had died. Unfortunately, that isn't the case. The descriptors appear to have a life of their own, independent of the spawned process, so even though the process on the other end is dead, I get no error status on the file descriptor I'm using to communicate with it.

    I've got the PID for the nested process (returned from the _spawnl call), but I don't see anything I can do with that. My code works really well except for one thing: I can't detect whether the spawned process is simply busy computing me an answer or has died. If I can use the information from _pipe and _spawnl to determine whether the spawned process is dead, I'll be golden. Suggestions very welcome. Thanks in advance.

    UPDATE: I found a fairly simple solution and added it as the selected answer.
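    For contrast, the "watch the process, not the pipe" idea is easy to see outside the CRT. A sketch in Python rather than Win32 C, purely to illustrate the shape of the approach (the command line is hypothetical); the Win32 analogue would be waiting on the process handle returned by _spawnl rather than on the descriptors:

        import subprocess
        import threading

        # Spawn the black-box child with pipes for the conversation.
        proc = subprocess.Popen(
            ["blackbox.exe", "--in-fd", "0", "--out-fd", "1"],   # hypothetical args
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
        )

        child_died = threading.Event()

        def watcher():
            proc.wait()          # blocks until the child really exits
            child_died.set()     # readers now know silence means death, not "busy"

        threading.Thread(target=watcher, daemon=True).start()

        # Elsewhere, in the read loop:
        # if child_died.is_set(): handle the crash instead of waiting forever.

    The point is that liveness comes from the process object itself, which distinguishes "busy computing" from "dead" in a way the pipe descriptors never will.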

  • Server.Execute(path): executed page returns the calling page's URL from Request.Url

    - by ClarkeyBoy
    Hey, as explained in the title, I am having a problem getting the URL of the page being executed from within that page. Basically I have a dynamic catalogue where customers select products they are interested in. The manager of the company I am doing this for would like to be able to create an up-to-date offline catalogue at any given time, to send out to customers who don't have an internet connection.

    So far it's going really well. I am using Server.Execute to get the content for each page, putting it in static HTML pages, and changing the dynamic links to static HTML links (i.e. changing all .aspx links to .htm). I am able to output all the pages for About Us, Contact Us, Home, and the entire catalogue. However, one of the stylesheets, which is included in the page based on the URL (if the page is in the administration section it is not included, otherwise it is), is not being included in the pages when it should be. I have tried outputting the URL, but it just returns the URL of the calling page, not the page being called.

    Does anyone have any idea why this is happening? Any help would be greatly appreciated. Regards, Richard Clarke

  • How to change font color inside nav element?

    - by user2924752
    I have a <nav> element and I want to change the color of the links within it, but all other links on my page are styled using the following CSS:

        a:link {
            color: #22b14c;
            text-decoration: none;
        }

    Here is the nav:

        <nav id="Nav">
            <a href="index.html">Home</a> |
            <a href="Gallery.html">Library</a> |
            <a href="Contact.html">Contact</a> |
            <a href="About.html">About</a>
        </nav>

    And the nav CSS:

        #Nav {
            margin-top: 20px;
            background-color: #000000;
            color: #f2f2f2;
            font-size: 40px;
            font-family: "Calibri";
            text-align: center;
        }

    I tried a span inside the nav element but that didn't work. How can I change the color for the links inside this element only?

  • Open PDF Content files in ASP.NET MVC 2

    - by mcbingo
    I want to provide simple href links to the PDF forms that reside in my Forms folder. I have created a simple Index.aspx and a FormController Index action that simply iterates through the list of PDF files using my FormMetaData.xml file. The links get created just fine, but when you click on them I get a 404 that looks like this:

        Server Error in '/' Application.
        The resource cannot be found.
        Description: HTTP 404. The resource you are looking for (or one of its
        dependencies) could have been removed, had its name changed, or is
        temporarily unavailable. Please review the following URL and make sure
        that it is spelled correctly.
        Requested URL: /Forms/ccindteamgolfform.pdf
        Version Information: Microsoft .NET Framework Version:2.0.50727.4927;
        ASP.NET Version:2.0.50727.4927

    It seems like this should open a new browser window with the PDF in it, but perhaps I am making a bad assumption. The PDF content files have a Build Action of Content and Copy to Output set to Copy Always. Here is an example of the output HTML for the link from my Index.aspx page:

        <span class="form">
            <a href="Forms/ccindteamgolfform.pdf" target="_blank">
                <span class="description">Entry Form</span></a></span>

    I must be missing something, because this does not work. Do I need to add a MapRoute for these documents, or am I missing something else with the routing? This seems like it should not be that difficult.

  • How to insert and call by row and column in sqlite3 with Python

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do three things: insert or update a value at a specific row and column, and select a value for each row and column?

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()
        c.execute('''create table simple (links text)''')
        con.commit()

        dic = {'x1': {'y1': 1.0, 'y2': 0.0},
               'x2': {'y1': 0.0, 'y2': 2.0, 'y3': 1.5},
               'x3': {'y2': 2.0, 'y3': 1.5}}
        ucols = {}

        # My current thought is to collect all row values and all column values
        # from dic and populate the table rows and columns accordingly. How to
        # call by row and column I haven't figured out yet.

        # populate rows in first column
        for row in dic:
            print row
            c.execute("""insert into simple ('links') values ('%s')""" % row)
            con.commit()

        # unique columns
        for row in dic:
            print row
            for col in dic[row]:
                print col
                ucols[col] = dic[row][col]

        # populate columns
        for col in ucols:
            print col
            c.execute("alter table simple add column '%s' 'float'" % col)
            con.commit()

        # Needed: insert values by row x and column y. How to do this?
        # E.g. x1 and y2 should put in 0.0. I tried as follows; it didn't work:
        for row in dic:
            for col in dic[row]:
                val = dic[row][col]
                c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'""" % (col, val, row))
                con.commit()

        # update a value at a specific row x and column y?
        # select a value at a specific row x and column y?
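    A likely culprit in the failing update above is the WHERE clause: in SQL, single quotes make string literals, so WHERE 'links'='x1' compares two constants, is never true, and the UPDATE silently touches zero rows. A sketch of a fix, assuming the schema built above: quote identifiers with double quotes (the unambiguous form in SQLite) and bind values with ? placeholders. Column names cannot be bound as parameters, so the identifier is still substituted in, which is only safe here because the names come from our own dictionary:

        def set_cell(cur, col, row, val):
            cur.execute('UPDATE simple SET "%s" = ? WHERE links = ?' % col, (val, row))

        def get_cell(cur, col, row):
            cur.execute('SELECT "%s" FROM simple WHERE links = ?' % col, (row,))
            hit = cur.fetchone()
            return hit[0] if hit else None

        set_cell(c, 'y2', 'x1', 0.0)
        con.commit()
        print get_cell(c, 'y2', 'x1')   # -> 0.0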

  • Basic compile issue with QT4

    - by Cobus Kruger
    I've been trying to get a dead simple listing from a university textbook to compile with the newest Qt SDK for Windows, which I downloaded last night. After struggling through the regular nonsense (no make.bat, needing to manually add environment variables, and so on) I am finally at the point where I can build, but only one of the two libraries seems to work. The .pro file I use is dead simple:

        SUBDIRS += utils \
                   dataobjects
        TEMPLATE = subdirs

    In each of these two subfolders I have the source for a library. Running qmake generates a makefile, and running make runs through all the preliminaries and then fails on the g++ call:

        g++ -enable-stdcall-fixup -Wl,-enable-auto-import -Wl,-enable-runtime-pseudo-reloc
            --out-implib,libdataobjects.a -shared -mthreads -Wl
            -Wl,--out-implib,c:\Users\Cobus\workspace\lib\libdataobjects.a
            -o ..\..\lib\dataobjects.dll object_script.dataobjects.Debug
            -L"c:\Users\Cobus\Portab~1\Qt\2010.02.1\qt\lib" -LC:\Users\Cobus\workspace\lib
            -lutils -lQtXmld4 -lQtGuid4 -lQtCored4
        c:/users/cobus/portab~1/qt/2010.02.1/mingw/bin/../lib/gcc/mingw32/4.4.0/../../../../mingw32/bin/ld.exe: cannot find -lutils

    The problem seems to be right near the end of the command line, where -lutils is added, indicating that there is a library by the name of utils. While I would have expected to see that, you'll notice the library names after --out-implib include lib in the name, so they become libutils and libdataobjects. I have tried to figure out why this is happening, to no avail. Does anyone have an idea what's going on?

  • Why should I use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking whether Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend, and I design and build the databases as well. I have used L2S successfully a couple of times already.

    Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S? I was at Code Camp this weekend, and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here)

    I understand that EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the Core App, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker whether EF would help me in this regard, he said "No!"

  • How do I set up jQuery to exclude classes in a function?

    - by user1497158
    I essentially only understand how to read bits of JavaScript and make modifications. I am using grid-slider, a script I purchased, but the code writer is MIA at the moment, so hopefully someone here can help me. Basically, it's a slider, and there are options to have links open like normal or to have links open in a panel on the same page. I want some links to open in a panel and others to open in the parent window.

    It seems to me that all that would be required to do this is to activate the panel display function (which I've done) and then set up an exclude, so that uls or lis with a specific class are left out of the function. I've read about the .not selector, but I don't see how to make it applicable to this code:

        else {
            if (this._displayOverlay) {
                if ($item.find(">.content").size() > 0) {
                    $item.data("type", "static");
                } else {
                    var contentType = this.getContentType($link);
                    var url = $link.attr("href");
                    $item.data({type: contentType, url: (typeof url != "undefined") ? url : ""});
                }
                $item.css("cursor", "pointer").bind("click", {elem: this, i: i}, this.openOverlay);
            }
            $link.data("text", $item.find(">div:first").html());
            $img = $link.find(">img");
        }

    Can anyone help based on looking at this? Here's a link to the demo of the code: http://codecanyon.net/item/jquery-grid-style-slider/full_screen_preview/1204040 Thank you.

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but do not add any value, as they are specifically created to fool crawlers. An example:

    We tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and the URLs are new to the crawler; however, this is only because the developer has built a "loop trap" or "black hole". The crawler will add each new sub page, and each sub page will have further sub pages, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages that we are actually not interested in.

    These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain for infinity. My question is: what techniques can be used to detect so-called black holes?

    One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how that can be a reliable technique when you do not know what kind of site is to be crawled. A legitimate site, like Wikipedia, can have hundreds of thousands of pages, so such a limit could return a false positive for those kinds of sites. Any feedback is appreciated. Thanks.
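    One budget-based variant that sidesteps the Wikipedia false positive is to cap pages per path prefix rather than per site, and to treat absurd URL depth as a trap signal in its own right. A minimal Python sketch of both heuristics; the thresholds are invented for illustration and would need tuning:

        from urllib.parse import urlparse
        from collections import defaultdict

        MAX_DEPTH = 8            # path segments before we suspect a loop trap
        MAX_PER_PREFIX = 500     # pages allowed under one shallow path prefix

        pages_per_prefix = defaultdict(int)

        def looks_like_trap(url):
            parts = urlparse(url)
            segments = [s for s in parts.path.split("/") if s]
            # Heuristic 1: self-linking loops tend to grow the path forever.
            if len(segments) > MAX_DEPTH:
                return True
            # Heuristic 2: too many distinct URLs under one shallow prefix.
            prefix = parts.netloc + "/" + "/".join(segments[:2])
            pages_per_prefix[prefix] += 1
            return pages_per_prefix[prefix] > MAX_PER_PREFIX

    A large legitimate site spreads its pages across many prefixes, so each prefix stays under budget, while a generated trap funnels everything through one branch. In practice crawlers also compare content fingerprints (e.g. shingling), so pages that are "unique but useless" random text get caught by their statistical sameness.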

  • HATEOAS - Discovery and URI Templating

    - by Paul Kirby
    I'm designing a HATEOAS API for internal data at my company, but have been having trouble with the discovery of links. Consider the following steps for someone to retrieve information about a specific employee in this system:

    1. The user sends a GET to http://coredata/ to get all available resources, which returns a number of links, including one tagged rel = "http://coredata/rels/employees".
    2. The user follows the HREF from the first request, performing a GET at (for example) http://coredata/employees.

    The data returned from this last call is my conundrum, and a situation where I've heard mixed suggestions. Here are some of them:

    1. That GET will return all employees (with perhaps truncated data), and the client would be responsible for picking the one it wants from that list.
    2. That GET would return a number of URI-templated links describing how to query, get one employee, or get all employees. Something like:

        "_links": {
            "http://coredata/rels/employees#RetrieveOne": {
                "href": "http://coredata/employees/{id}"
            },
            "http://coredata/rels/employees#Query": {
                "href": "http://coredata/employees{?login,firstName,lastName}"
            },
            "http://coredata/rels/employees#All": {
                "href": "http://coredata/employees/all"
            }
        }

    I'm a little stuck here on what remains closest to HATEOAS. For option 1, I really do not want to make my clients retrieve all employees every time for the sake of navigation, but I can see how using URI templating in option 2 introduces some out-of-band knowledge.

    My other thought was to use the RetrieveOne, Query, and All operations as my cool URLs, but that seems to violate the concept that you should be able to navigate to the resources you want from one base URI. Has anyone else managed to come up with a good way to handle this? Navigation is dead simple once you've retrieved one resource or a set of resources, but it seems very difficult to use for discovery.
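    One mitigating observation on option 2: the out-of-band knowledge is only the RFC 6570 template syntax itself, not any particular URL shape, since templates expand mechanically from the rel's variables. A stdlib-only Python sketch of a client-side expander covering just the two forms used above ({var} and {?a,b,c}); a real client would use a full RFC 6570 library instead:

        import re
        from urllib.parse import urlencode, quote

        def expand(template, **variables):
            # Expand {?a,b,c} into a query string from whichever vars are present.
            def query(match):
                names = match.group(1).split(",")
                present = {n: variables[n] for n in names if n in variables}
                return "?" + urlencode(present) if present else ""
            template = re.sub(r"\{\?([^}]*)\}", query, template)
            # Expand simple {var} path segments.
            return re.sub(r"\{(\w+)\}", lambda m: quote(str(variables[m.group(1)])), template)

        print(expand("http://coredata/employees/{id}", id=42))
        # http://coredata/employees/42
        print(expand("http://coredata/employees{?login,firstName,lastName}", lastName="Kirby"))
        # http://coredata/employees?lastName=Kirby

    The client hardcodes only the rel names and variable names, which it must understand anyway; the server stays free to reshape its URLs.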

  • Losing URI segments when paginating with CodeIgniter

    - by Danny Herran
    I have a /payments interface where the user should be able to filter by price range, bank, and other criteria. Those filters are standard select boxes. When I submit the filter form, all the post data goes to another method called payments/search. That method performs the validation, saves the post values into session flashdata, and redirects the user back to /payments, passing the flashdata name via the URL. So my standard pagination links with no filters are exactly like this:

        payments/index/20/
        payments/index/40/
        payments/index/60/

    And if you submit the filter form, the returning URL is:

        payments/index/0/b48c7cbd5489129a337b0a24f830fd93

    This works just great: if I change the zero to something else, it paginates just fine. The only issue is that the << 1 2 3 4 page links won't keep the hash after the pagination offset. CodeIgniter is generating the page links while ignoring that additional URI segment. My uri_segment config is already set to 3:

        $config['uri_segment'] = 3;

    I cannot set the page offset to segment 4, because that hash may or may not exist. Any ideas of how I can solve this? Is it mandatory for CI to have the offset as the last segment in the URI? Maybe I am trying an incorrect approach, so I am all ears. Thank you, folks.

  • Emails sometimes get scrambled

    - by Alex
    Folks, I have a PHP-based site (using the QCubed framework); as part of the site, I have a daemon that sends out several thousand emails a day (no, I'm not a spammer; everything is opt-in :)). Emails are sent through a custom framework component that serves as an SMTP client, and I'm using a paid SMTP gateway from DNSExit.com to get the emails actually delivered. The emails are simple HTML-based messages; they really contain just simple links.

    My issue is that these links sometimes (not consistently!) get scrambled in transit: tags somehow get mixed up, and some links are non-functional in the email. The issue happens on a small percentage of all sent emails, and it is not consistent (i.e. the same exact source message HTML may or may not cause the scrambling in transit). Have any of you seen this? Any thoughts on how to troubleshoot?

  • How to prevent session hijacking with SID (CGI perl)

    - by Gnippots
    I have a web app used by a small number of people (internal only) and am using a randomized session ID that is stored in the user record and placed in various links. I have had a problem where users send links to each other, which allows them to hijack the sender's session. What are some ways of preventing this from happening while still letting users send links to one another?

    Edit: The session ID in the link (which also contains $username) is just compared to what is stored in the User table. &incorrectLogin just prints an error followed by die;

        if ($sid) {
            $sth = $dbh->prepare("SELECT * FROM tbl_User WHERE UserID = '$username'");
            $sth->execute();
            $ref = $sth->fetchrow_hashref();
            $session_chk = $ref->{'usr_sessionID'};
            unless ($sid eq $session_chk) { &incorrectLogin; }
        }

    The problem is that if someone uses a link that was created by someone else, the page will load as them. I am not using cookies, and I recall being told in the past that CGI Perl cookie handling is quite poor.
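    The standard cure is to move the session out of the URL (a cookie), or failing that, to make the URL token worthless on its own by binding it to something a forwarded link does not carry. Sketched in Python rather than Perl purely for brevity; the per-browser nonce here is assumed to live in a cookie, and the secret is server-side only:

        import hashlib
        import hmac
        import time

        SECRET = b"server-side secret, never placed in a URL"   # hypothetical

        def make_token(username, browser_nonce, ttl=3600):
            expires = str(int(time.time()) + ttl)
            msg = ("%s|%s|%s" % (username, browser_nonce, expires)).encode()
            sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            return expires + "." + sig

        def check_token(token, username, browser_nonce):
            try:
                expires, sig = token.split(".")
            except ValueError:
                return False
            if int(expires) < time.time():
                return False                    # link has expired
            msg = ("%s|%s|%s" % (username, browser_nonce, expires)).encode()
            good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, good)

    A forwarded link carries the token but not the recipient's cookie nonce, so check_token fails for anyone except the browser the link was minted for; a short expiry limits the damage even if cookies really are off the table.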

  • Two controllers in one layout, Rails 3

    - by Grizlord
    Okay, I have two models: a Recipe model and a Category model. In my layout (application.html.erb) I have a main container div that yields the recipes index action. I'm trying to list all the category names as links in a sidebar (also a div) by iterating over them in an unordered list. When you click one of the links, it goes to the category show page, which then lists all the recipes in that category. Here is how I'm trying to list the links:

        <div class="container" id="categories">
            <% for category in @categories %>
                <ul>
                    <li><%= link_to category.name, category %></li>
                </ul>
            <% end %>
        </div>

    The problem is I get a NoMethodError: "You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil.each." It is not retrieving the records from the model. Any suggestions on how to get this done would be greatly appreciated. I tried rendering a partial, as some of the other similar posts have suggested, but I still get the same error. This is the exact error:

        NoMethodError in Recipes#index

        Showing /Users/grizlord/Rails/recipe2/app/views/layouts/application.html.erb where line #39 raised:

        You have a nil object when you didn't expect it!
        You might have expected an instance of Array.
        The error occurred while evaluating nil.each

        Extracted source (around line #39):

        36: </div>
        37: <div class="container" id="categories">
        38:   Browse by Category
        39:   <% for category in @categories %>
        40:     <ul>
        41:       <li><%= link_to category.name, category %></li>
        42:     </ul>

  • JavaScript and rendering pause, and stay paused, on scroll in the Android browser

    - by user357303
    Hi. I've found some weird behavior related to scrolling, rendering, and JavaScript. How to make it happen: on any webpage long enough to scroll, start to scroll pretty fast (fling the page), then release the touch. Now, while the page is still scrolling because of the momentum, tap the screen to stop the scroll. This makes the browser enter a weird mode.

    On the Nexus One it behaves like this: the updating of what's shown on the screen stops. You can still click on links, and they go where they are supposed to, but what's shown on the screen stays the same. If you then scroll the screen a bit, the updating kicks in again, and what you were supposed to see all along is shown.

    On all the phones with HTC Sense I've tried (Hero, Desire, Legend) this happens: the updating of the screen is stopped, just like on the Nexus One, but the execution of any JavaScript is stopped as well. If you click on a link that takes you to another page, however, things return to normal.

    The way I tested this was to create a page like this: http://pastebin.ca/1881620 The changeColor function simply changes the background color of 'container' to a few different colors. So before the error, when you click any link the color changes. After the error, this happens:

    Nexus One: when you click any of the links nothing happens (except the "orange link selected rounded corner box thing" is shown, as if the link was clicked). Then, when you scroll a bit, you can see the color has changed (an equal number of times to the number of times I clicked the link).

    On Sense: the links take me to google.com.

    Has anyone else noticed this problem? Is there any way to work around it? Thanks.

  • How can I generate a FindBugs report that shows me the bugs removed between two revisions in the bug database?

    - by David Deschenes
    I am attempting to execute a combination of the FindBugs commands filterBugs and convertXmlToText, against a bug database that I created, to generate a report that shows me all of the bugs removed between two revisions of the system I am working on. Unfortunately, the resulting report does not show any bug details. It appears that convertXmlToText throws away all bugs that are dead (aka inactive)... the exact set of bugs that I'd like to see.

    Below is what I see when I pass the results of the filterBugs command to the mineBugHistory command:

        build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./mineBugHistory
        seq  version  time           classes  NCSS   added  newCode  fixed  removed  retained  dead  active
        0    r39764   1271169398000  438      74069  0      64       0      0        0         0     64
        1    r39921   1271186932000  441      74333  0      0        22     0        42        0     42
        2    r40149   1271185876000  449      74636  0      0        3      0        39        22    39
        3    r40344   1271180332000  452      74789  0      0        7      0        32        25    32
        4    r40558   1271179612000  452      74806  0      0        1      0        31        32    31
        5    r40793   1271178818000  464      75610  0      0        20     0        11        33    11
        6    r41016   1271176154000  467      75712  0      0        4      0        7         53    7
        7    r41303   1271175616000  481      76931  0      0        7      0        0         57    0
        8    r41558   1271175026000  486      77793  0      0        0      0        0         64    0

    What I'd like to see in the HTML report is the list of the 64 bugs that are shown as active in version r39764 (sequence #0). Below is the command line that I am using to generate the HTML report:

        build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./convertXmlToText -html:fancy-hist.xsl > ../../../mmfg/bugDB-removed.html

  • node.js UDP data loss at high packet rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, around 1200 packets per second with frames close to the 1500-byte limit. Each data packet carries an incrementing number, so it is easy to track the number of lost packets.

        var server = dgram.createSocket("udp4");
        server.on("message", function (message, rinfo) {
            //~ processData(message);
            //~ writeData(message, null, 5000);
        }).bind(10001);

    In the receiving callback I tested two cases. First I saved 5000 packets to a file; the result was no dropped packets. Then I included a data-processing routine and got about a 50% drop rate. What I expected was that the processing routine would be completely asynchronous and would not introduce dead time to the system, since it is a simple parser that processes the binary data in the packet and emits events to a further processing routine. It seems that the parsing routine introduces dead time during which the event handler is unable to handle each packet. At low packet rates (< 1200 packets/sec) there is no data loss observed! Is this a bug, or am I doing something wrong?
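    Whatever the runtime, the usual mitigation has the same shape: let the receive path do nothing but enqueue, parse on a separate worker so slow processing becomes memory pressure instead of kernel-buffer drops, and enlarge SO_RCVBUF to absorb bursts. A Python sketch of that shape, offered as an illustration of the pattern rather than a node.js fix (the parser is a placeholder):

        import socket
        import threading
        import queue

        q = queue.Queue()

        def parse(data):
            pass                      # placeholder for the real binary parser

        def receiver():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            # A bigger kernel buffer rides out scheduling hiccups at burst rates.
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
            sock.bind(("0.0.0.0", 10001))
            while True:
                data, _addr = sock.recvfrom(2048)
                q.put(data)           # cheapest possible work on the hot path

        def worker():
            while True:
                parse(q.get())        # heavy lifting happens off the hot path

        threading.Thread(target=receiver, daemon=True).start()
        threading.Thread(target=worker, daemon=True).start()
        threading.Event().wait()      # keep the main thread alive

    In node.js terms the equivalent is to keep the "message" handler down to a buffer copy and a queue push, defer the parsing out of the handler, and apply whatever OS-level receive-buffer tuning is available.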

  • Is this a bug? Or is it a setting in ASP.NET 4 (or MVC 2)?

    - by John Gietzen
    I just recently started trying out T4MVC, and I like the idea of eliminating magic strings. However, when trying to use it on my master page for my stylesheets, I get this:

        <link href="<%: Links.Content.site_css %>" rel="stylesheet" type="text/css" />

    rendering like this:

        <link href="&lt;%: Links.Content.site_css %>" rel="stylesheet" type="text/css" />

    Whereas these render correctly:

        <link href="<%: Url.Content("~/Content/Site.css") %>" rel="stylesheet" type="text/css" />
        <link href="<%: Links.Content.site_css + "" %>" rel="stylesheet" type="text/css" />

    It appears that as long as I have double quotes inside the code segment, it works, but when I put anything else in there, it escapes the leading "less than". Is this something I can turn off? Is this a bug?

    Edit: This does not happen for <script src="..." ... />, nor does it happen for <a href="...">.

    Edit 2: Minimal case:

        <link href="<%: string.Empty %>" />

    vs

        <link href="<%: "" %>" />

  • How can I evaluate the connectedness of my nodes?

    - by Travis Leleu
    I've got a space of nodes that are all interconnected, based on a "similarity score", and I would like to determine how "connected" each node is with the others. My purpose is to find nodes that are poorly connected, to make sure that the backlinks from other nodes to them are prioritized.

    Perhaps an example would help. I've got a web page that links to my other pages based on a similarity score. Suppose I have the pages A, B, C, ...

        A has a backlink from every other page, so it's very well connected. It also has links to all my other pages (each edge in the graph is essentially bidirectional).
        B has only one backlink, from A.
        C has a link from A and D.

    I would like to make sure that the A-B link is prioritized over the A-C link (even if the similarity score between C and A is higher than between B and A). In short, I would like to evaluate which nodes are least and best connected, so that I can mangle the results to my ends.

    I believe this is graph connectedness, but I'm at a loss to develop a (simple) algorithm that will help me here. Simply counting the backlinks to a node may be a starting point, but then how do I take the next step, which is to properly weight the links on the original node (A, in the example above)?
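    Counting backlinks is indeed the natural first step (in-degree is the quantity that PageRank generalizes). A minimal Python sketch of the next step asked about: divide each link's similarity by the target's in-degree, so that poorly connected targets float to the top of the source's link list. The graph and the weighting formula are invented for illustration:

        from collections import defaultdict

        # similarity[(src, dst)] = score for the directed link src -> dst
        similarity = {
            ("A", "B"): 0.4, ("A", "C"): 0.7,
            ("B", "A"): 0.4, ("C", "A"): 0.7, ("D", "C"): 0.5,
        }

        in_degree = defaultdict(int)
        for _src, dst in similarity:
            in_degree[dst] += 1

        def adjusted_score(src, dst):
            # The fewer backlinks dst has, the more src's link to it counts.
            return similarity[(src, dst)] / in_degree[dst]

        links_from_a = sorted(
            (dst for src, dst in similarity if src == "A"),
            key=lambda dst: adjusted_score("A", dst),
            reverse=True,
        )
        print(links_from_a)   # ['B', 'C']: B outranks C despite C's higher raw score

    Replacing 1/in_degree with a dampened variant (e.g. dividing by the square root of the in-degree) softens the boost once the counts grow.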

  • Visual Studio: Collapse Methods, but not Comments (Summary etc.)

    - by Alex
    Hello, is there a way (a setting? a "macro"? an extension?) to toggle outlining so that only the using section and my methods collapse to their signature lines, while my comments (summary and double-slash comments) and classes stay expanded? Examples:

    1) Uncollapsed:

        using System;
        using MachineGun;

        namespace Animals
        {
            /// <summary>
            /// Angry animal
            /// Pretty Fast, too
            /// </summary>
            public partial class Lion
            {
                //
                // Dead or Alive
                public Boolean Alive;

                /// <summary>
                /// Bad bite
                /// </summary>
                public PieceOfAnimal Bite(Animal animalToBite)
                {
                    return animalToBite.Shoulder;
                }

                /// <summary>
                /// Fatal bite
                /// </summary>
                public PieceOfAnimal Kill(Animal animalToKill)
                {
                    return animalToKill.Head;
                }
            }
        }

    2) Collapsed (the following is my desired result):

        using[...]

        namespace Animals
        {
            /// <summary>
            /// Angry animal
            /// Pretty Fast, too
            /// </summary>
            public partial class Lion
            {
                //
                // Dead or Alive
                public Boolean Alive;

                /// <summary>
                /// Bad bite
                /// </summary>
                public PieceOfAnimal Bite(Animal animalToBite)[...]

                /// <summary>
                /// Fatal bite
                /// </summary>
                public PieceOfAnimal Kill(Animal animalToKill)[...]
            }
        }

    This is how I prefer seeing my class files (the collapsed form). I've been doing the collapsing by hand a million times by now, and I think there should be a way to automate/customize/extend VS to do it the way I want. Every time I debug or hit a breakpoint, it uncollapses and messes things up. If I collapse via the context menu's "collapse to outline" etc., it also collapses my comments, which isn't desired. Appreciate your help!

  • Mixing has_one and has_and_belongs_to_many associations

    - by Thomas
    I'm trying to build a database of URLs (links). I have a Category model that has and belongs to many Links. Here's the migration I ran:

        class CreateLinksCategories < ActiveRecord::Migration
          def self.up
            create_table :links_categories, :id => false do |t|
              t.references :link
              t.references :category
            end
          end

          def self.down
            drop_table :links_categories
          end
        end

    Here's the Link model:

        class Link < ActiveRecord::Base
          validates :path, :presence => true, :format => { :with => /^(#{URI::regexp(%w(http https))})$|^$/ }
          validates :name, :presence => true
          has_one :category
        end

    Here's the Category model:

        class Category < ActiveRecord::Base
          has_and_belongs_to_many :links
        end

    And here's the error the console kicked back when I tried to associate the first link with the first category:

        >> link = Link.first
        => #<Link id: 1, path: "http://www.yahoo.com", created_at: "2011-01-10...
        >> category = Category.first
        => #<Category id: 1, name: "News Site", created_at: "2011-01-11...
        >> link.category << category
        => ActiveRecord::StatementInvalid: SQLite3::Exception: no such column: categories.link_id

    Are my associations wrong, or am I missing something in the database? I expected it to find the links_categories table. Any help is appreciated.
