Search Results

Search found 39224 results on 1569 pages for 'content delivery network'.

  • Content Assist problem in Eclipse 3.5

    - by user361996
    I've got a Content Assist problem in Eclipse 3.5 (eclipse-java-galileo-win32.zip). When I press Alt+/ (I've changed the binding from Ctrl+Space) in the Java editor, no assist proposals show up. I've solved this problem and I'd like to share my experience:
    1. Eclipse > Preferences > General > Keys;
    2. Search for 'Content Assist';
    3. You'll find that the 'When' field defaults to 'In Dialogs and Windows'; change it to 'Editing Java Source', then press 'OK' to save.
    I cannot post an image, but here is a link: http://www.freeimagehosting.net/image.php?e0987ea847.png That's all.

  • Getting content of a Facebook page in Adobe Flex

    - by cuneyt
    Hi guys, I wrote a Flex application that sends a URLRequest to Facebook and gets the content of the page as a string. The user clicks a button and the application connects to Facebook. And no, I do not mean using the Facebook API; it is more like a screen scraper. This application worked locally, but when deployed to a server it throws a sandbox security error. I have my crossdomain.xml at the root, but I don't think that's the problem: it's not just Facebook, I cannot fetch any web site once the application is deployed on the server. What should I do to get the content of a remote web page?
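
    For reference, a minimal permissive policy file looks like the sketch below. One thing worth noting: the Flash runtime fetches crossdomain.xml from the domain being requested (facebook.com in this case), not from the server hosting the SWF, which would explain why every remote site fails once the app leaves the local sandbox.

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <!-- Lets Flash content loaded from any domain read data from THIS server -->
            <allow-access-from domain="*" />
        </cross-domain-policy>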

  • Content Provider and Image storage

    - by Paru
    I want to share some image icons between two applications. I stored the icons in a folder from application 1 and tried to use that folder from application 2, but I got a permission error. I couldn't grant the permission either, because the device is not rooted. So now I am trying to store the icons in a content provider. Is it possible to store images in a ContentProvider? Is there any other good method to implement this? Please help.
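
    Serving files through a provider is possible by overriding openFile(). A minimal sketch (the authority name com.example.icons and the file layout are hypothetical, and the provider also needs a <provider> entry in application 1's manifest):

        import android.content.ContentProvider;
        import android.content.ContentValues;
        import android.database.Cursor;
        import android.net.Uri;
        import android.os.ParcelFileDescriptor;
        import java.io.File;
        import java.io.FileNotFoundException;

        public class IconProvider extends ContentProvider {
            @Override public boolean onCreate() { return true; }

            // App 2 opens content://com.example.icons/icon1.png and reads the stream.
            @Override
            public ParcelFileDescriptor openFile(Uri uri, String mode)
                    throws FileNotFoundException {
                File icon = new File(getContext().getFilesDir(), uri.getLastPathSegment());
                return ParcelFileDescriptor.open(icon, ParcelFileDescriptor.MODE_READ_ONLY);
            }

            // Not needed for plain file sharing; stubbed to satisfy the base class.
            @Override public Cursor query(Uri u, String[] p, String s, String[] a, String o) { return null; }
            @Override public String getType(Uri u) { return "image/png"; }
            @Override public Uri insert(Uri u, ContentValues v) { return null; }
            @Override public int delete(Uri u, String s, String[] a) { return 0; }
            @Override public int update(Uri u, ContentValues v, String s, String[] a) { return 0; }
        }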

  • Failing faster when URL content is not found, howto

    - by Jam
    I have a thread pool that loops over a bunch of pages and checks whether some string is there or not. Whether the string is found or not, the response is near instant; however, if the server is offline or the application is not running, getting a rejection seems to take seconds. How can I change my code to fail faster?

        for (Thread thread : pool) {
            thread.start();
        }
        for (Thread thread : pool) {
            try {
                thread.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

    Here is my run method:

        @Override
        public void run() {
            for (Box b : boxes) {
                try {
                    connection = new URL(b.getUrl()).openConnection();
                    scanner = new Scanner(connection.getInputStream());
                    scanner.useDelimiter("\\Z");
                    content = scanner.next();
                    if (!content.equals("YES")) {
                        System.out.println("\tFAILED ON " + b.getName() + " BAD APPLICATION STATE");
                    }
                } catch (Exception ex) {
                    System.out.println("\tFAILED ON " + b.getName() + " BAD APPLICATION STATE");
                }
            }
        }
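
    The usual fix is to bound each attempt explicitly; URLConnection has supported connect and read timeouts since Java 5. A sketch of the relevant change inside run() (the 2-second values are arbitrary):

        connection = new URL(b.getUrl()).openConnection();
        connection.setConnectTimeout(2000); // fail within 2 s if the host is unreachable
        connection.setReadTimeout(2000);    // fail within 2 s if the server stops sending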

  • Gaining Reference to the Dialog from Dialog content

    - by coffeeaddict
    Here's my scenario: I've got a div on Page A. When an event happens (a click, whatever I define), I turn the div into a dialog, set its properties and open it. The data inside is returned from an async .ajax call that grabs the content of a Page B from a URL and appends it to the div by calling div.html(data) inside the .ajax callback; nothing special here, I don't think. My question: from Page B's content, how do I get a reference to the dialog so I can do some things to it? For instance, the content I received back from that .ajax call contains two buttons: one should close the dialog and the other should redirect to a new page. Neither works right now, because I have no reference to the dialog the content is sitting in.
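
    One sketch of a way out, assuming jQuery UI: the element that .dialog() was called on gets tagged with the ui-dialog-content class, so code running inside the loaded content can walk up to it without ever holding an explicit reference (#closeButton is a hypothetical id in the Page B markup):

        $('#closeButton').click(function () {
            // Find the ancestor element the dialog was created from and close it
            $(this).closest('.ui-dialog-content').dialog('close');
        });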

  • Background image JFrame with content

    - by Petr Safar
    I have a JFrame with BorderLayout; there are panels on all sides (North, East, ...) containing mostly labels and buttons. Now I want the frame to have a background image, and some research told me I had to change the content pane of my frame. When I try this, however, the content gets put in the background and isn't visible. Also, I don't know how to resize the image when the frame is resized. Is there an easy fix for this, or will I have to rework most of my code?
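
    A common pattern, sketched below under the assumption that the frame keeps its BorderLayout: use a panel that paints the image scaled to its own current size as the content pane, and make the child panels non-opaque (setOpaque(false)) so the image shows through.

        import java.awt.*;
        import javax.swing.*;

        class BackgroundPanel extends JPanel {
            private final Image background;

            BackgroundPanel(Image background) {
                this.background = background;
                setLayout(new BorderLayout()); // same layout the frame used before
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                // Stretches to the current size, so it follows frame resizes
                g.drawImage(background, 0, 0, getWidth(), getHeight(), this);
            }
        }

        // Usage: frame.setContentPane(new BackgroundPanel(image));
        //        frame.add(northPanel, BorderLayout.NORTH); // etc., as before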

  • Combining Content Data in Google Analytics

    - by David Csonka
    When I first started one of my WordPress blogs, the permanent URL for each post included the date of posting. The slug format looked like this: /blog/2010/01/25/this-is-my-article/ Later on, I changed it so that the date was not included, like this: /blog/this-is-my-article/ and set up a redirect plugin to make sure that users would reach the page they wanted until the site was re-indexed. In Google Analytics, when I review the stats for content, I now have multiple records for what is essentially the same page, e.g.:

        Top Content List:
        45 Pageviews - /blog/this-is-my-article/
        24 Pageviews - /blog/2010/01/25/this-is-my-article/
        33 Pageviews - /blog/some-other-article/

    Is there any way to combine those records somehow?
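
    One possible approach, assuming the profile-filter features of classic Google Analytics: a "Search and Replace" filter on the Request URI, with a search pattern like ^/blog/\d{4}/\d{2}/\d{2}/(.*)$ and replace string /blog/$1, would report both URL forms under one page. Note that profile filters only apply to data collected after they are set up; historical rows stay split.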

  • Using JQuery Lightbox within AJAX loaded content

    - by James
    Hey. So this is probably a very noob problem, but I'm not good enough to fix it. Basically, I have a gallery that I am loading into the page via AJAX. It looks simply like this:

        <div id="gallery">
            <a href="Image1.jpg"><img src="Image1Thumb.jpg" /></a>
            <a href="Image2.jpg" title=""><img src="Image2Thumb.jpg" /></a>
        </div>

    But because it's being loaded in as AJAX content, jQuery/Lightbox is not working, and I've no idea how I can get the script to run on and recognise this newly loaded content. Thanks! [Note: The jQuery Lightbox I am using.]
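
    The usual cause is that the lightbox plugin binds to the anchors that exist when it first runs, so anchors inserted later are never wired up. One sketch of a fix, assuming the common jQuery lightBox plugin API and a hypothetical gallery.html fragment: re-run the binding in the AJAX completion callback, after the new markup is in the DOM.

        $('#gallery-container').load('gallery.html', function () {
            $('#gallery a').lightBox(); // bind the freshly inserted anchors
        });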

  • jquery/ajax load new content when available

    - by Tim
    I know this is quite a vague question -- sorry about that. If I have a forum, shoutbox or something similar where users enter and submit data, running on PHP and MySQL, what is the best way to have newly submitted content automatically appear on the page for all users as it is submitted? Much like a live news feed, if you like... A similar effect works here at Stack Overflow: while you are answering a question, you are told when a new answer is submitted. I want to test for newly submitted content and then display it automatically. Any suggestions? Many thanks :)
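
    The usual low-tech answer is polling: ask the server every few seconds for anything newer than what the page already shows. A sketch (latest.php, the JSON shape and #shoutbox are all hypothetical names):

        var lastId = 0;
        setInterval(function () {
            $.getJSON('latest.php', { since: lastId }, function (items) {
                $.each(items, function (i, item) {
                    $('#shoutbox').prepend('<p>' + item.html + '</p>');
                    lastId = Math.max(lastId, item.id); // remember the newest entry seen
                });
            });
        }, 5000); // poll every 5 seconds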

  • Show AJAX content after images have loaded

    - by Ben4Himv
    I am developing my own lightbox-style jQuery plugin. Everything works, but I want to hide the loaded content until its images have loaded in the browser from the AJAX call. I found a similar post and I am using the following script, but it's the setTimeout fallback that ends up revealing the content, not the .load handler. Am I trying to achieve the impossible?

        $.ajax({
            url: 'meet/' + pLoad + '.html',
            success: function(data) {
                var imageCount = $(data).filter('img').length;
                var imagesLoaded = 0;
                $(data).hide()
                    .appendTo('#zoom_inner')
                    .filter('img')
                    .load(function() {
                        ++imagesLoaded;
                        if (imagesLoaded >= imageCount) {
                            $('#zoom_inner').children().show();
                        }
                    });
                setTimeout(function() {
                    $('#zoom_inner').children().show();
                }, 5000);
            }
        });
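
    It isn't impossible, but two things bite in the script above: filter('img') only matches top-level images (not ones nested inside other elements), and an image served from the browser cache may never fire 'load' at all. A sketch of a version that accounts for both (same element ids as above):

        var $content = $(data).hide().appendTo('#zoom_inner');
        var $images = $content.find('img').add($content.filter('img'));
        var remaining = $images.length;

        function reveal() {
            if (--remaining <= 0) { $('#zoom_inner').children().show(); }
        }

        if (remaining === 0) {
            $('#zoom_inner').children().show(); // nothing to wait for
        } else {
            $images.each(function () {
                if (this.complete) { reveal(); }            // already cached
                else { $(this).one('load error', reveal); } // still downloading
            });
        }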

  • file names based on file content

    - by Mark
    So, in other words, I want some algorithm to generate a unique, reasonable-length filename based on a file's binary content. Two files that have the same binary content should get the same name. Obviously there are limits to this: presumably you couldn't have unique, reasonable-length filenames for each member of a large set of large files differing only at a handful of bit positions. But presumably there is some heuristic, a best approximation, that for example exploits known attributes of typical image files. If I had the name of some algorithm that does this, I could google it and find other approaches as well.
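
    For the exact-match requirement, a cryptographic hash of the bytes is the standard answer: identical content always produces the identical fixed-length name, though by design two files differing in one bit get unrelated names (near-duplicate matching of images is a different family of algorithms, usually called perceptual hashing). A minimal Java sketch:

        import java.io.*;
        import java.security.MessageDigest;

        static String contentName(File f) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            try (InputStream in = new FileInputStream(f)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) { md.update(buf, 0, n); }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) { hex.append(String.format("%02x", b)); }
            return hex.toString(); // 40 hex chars; same bytes, same name
        }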

  • Changing CCK content-type details results in numerous DB calls for the menu system

    - by Paul Strugger
    Every time I make a change in the details of a content type, it takes too long. I thought it had to do with the fact that I have too many content types and fields (~500), but when I loaded the devel module to see the queries that take that long, I saw:

        Executed 32212 queries in 12267.57 milliseconds.
        Queries taking longer than 5 ms and queries executed more than once are highlighted.
        Page execution time was 55763.32 ms.

    When I look at the details, I notice that the vast majority of DB calls come from the menu system, e.g.:

        - _menu_route
        - menu_local_tasks
        - admin_menu_link_save

    Why is that? Can I avoid some of these? It doesn't seem logical!

  • Prevent "jQuery( html )" from triggering the browser to request images and other referenced content

    - by Chris Dutrow
    I'm using jQuery to create new DOM elements from text. Example:

        jQuery('<div><img src="/some_image.gif"></img></div>');

    When this statement is executed, it causes the browser to request the file 'some_image.gif' from the server. Is there a way to execute this statement so that the resulting jQuery object can be used for DOM traversal, but without causing the browser to hit the server with requests for images and other referenced content? Example:

        var jquery_elements = jQuery('<div><img class="a_class" src="/some_image.gif"></img></div>');
        var img_class = jquery_elements.find('img').attr('class');

    The only idea I have right now is to use a regex to remove all the 'src' attributes from image elements, and anything else that triggers browser requests, before handing the HTML to jQuery. How can jQuery be used to evaluate HTML without triggering the browser to request the referenced content inside it? Thanks!
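
    A sketch of the regex workaround mentioned above: rename the src attributes before jQuery parses the string, so the browser never issues the requests, and read the original URL back from the renamed attribute when needed.

        var safeHtml = html.replace(/\ssrc=/gi, ' data-src=');
        var $els = jQuery(safeHtml);                     // no image requests fired
        var img_class = $els.find('img').attr('class');
        var img_src = $els.find('img').attr('data-src'); // original URL preserved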

  • How to make content take up 100% of height and width

    - by Hiro2k
    I'm so close, but I can't get this to work like I want it. I'm trying to get the header and the menu to always be visible, and have the content take up the rest of the viewport with its own scrollbar when it overflows. The problem is that the width of the content isn't being stretched to the right, and I get a scrollbar in the middle of my page. I also can't get it to take up the remaining window height; if I set the height to 100%, it uses the whole window height instead of what is left. I'm only working with IE7 or better, so there's no need to worry about older browsers, and I'm not averse to using jQuery if it can solve this problem! http://pastebin.com/x31mGtXr
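
    One common approach that works in IE7, sketched with placeholder sizes (the 80px/150px offsets and the element ids are assumptions): give the header and menu fixed dimensions and absolutely position the content region against all four edges, so it fills exactly what is left and scrolls on its own.

        html, body { height: 100%; margin: 0; }
        #header  { height: 80px; }
        #menu    { position: absolute; top: 80px; left: 0; bottom: 0; width: 150px; }
        #content {
            position: absolute;
            top: 80px; left: 150px; right: 0; bottom: 0;
            overflow: auto; /* its own scrollbar, only on overflow */
        }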

  • Weird Network Behavior of Home Router

    - by Stilgar
    First of all, I would like to apologize, because what you are going to read is long and confusing, but I have been fighting this issue for 3 days now and am out of ideas. At home I have the following setup:

        - a 50 Mbps Internet connection into a home router A;
        - 2 desktop computers connected to router A via standard FTP LAN cables, including one where the cable is ~20 m long;
        - a second router B connected to router A via a standard FTP LAN cable X (~20 m long);
        - several devices on router B's wireless network, plus a couple of desktop computers connected to it by FTP LAN cables.

    For some reason, computers connected to router B, when it is connected via cable X, have a very slow Internet connection: about 5 times slower than expected. This is the actual problem I am trying to solve.

    Interesting facts:

        - If a computer is connected to cable X directly, instead of through router B, the Internet speed is just fine (up to the 50 Mbps I get from the ISP). Tested with two computers.
        - I have tried replacing router B with another router C, and the problem persists.
        - If I connect router B via another cable, to the same ports with the same settings, everything works and computers connected to router B have quite fast Internet.
        - I have tested mainly via Speedtest.net, but I have also achieved similar speeds when downloading a file.
        - The upload speed is noticeably higher than the download speed in all cases. Note that my ISP usually delivers higher upload speed (unless it hits the 50 Mbps cap).
        - The speed through router B with cable X is reduced 4-5 times no matter what the original speed is. For example, via router B I get 10 Mbps to local servers where I get 50 Mbps on router A; on a distant server where the ISP can only provide 25 Mbps, I get 4-5 Mbps on router B.
        - WiFi is slower than LAN on both routers (which is normal), and the reduction applies proportionally to WiFi. The upload speed, normally higher from the ISP, is also reduced proportionally.
        - I have tried two different network configurations: one with NAT behind NAT, where router B connects to router A via its WAN port and runs its own DHCP; and one where router B connects via a standard LAN port with DHCP disabled, acting as a switch, with router A's internal IP as the gateway for its clients. Both configurations work fine, and both show the reduced speed.
        - Pings seem to work just fine.
        - As far as I can tell, none of the cables is crossed.

    The RJ45 wiring of cable X is:

        orange, orange-white, brown, brown-white, blue, blue-white, green, green-white

    This is a big problem for me, since cable X passes through walls and floors and is very hard to replace. I may also have gotten some of the facts wrong, because I am almost going crazy with this issue, and testing involves going several floors up and down the staircase. One hypothesis I came up with is that the cable is defective in such a way that the voltage from the router affects its performance: connected to a computer it performs just fine, but the router has less power. A related hypothesis is that the cable is affected by electrical wiring in the walls when the voltage is low (I know nothing about electricity). So, any ideas what to do, what to test, or what the issue may be?

  • Does this prove a network bandwidth bottleneck?

    - by Yuji Tomita
    I had incorrectly assumed that my internal ApacheBench testing meant my server could handle 1k concurrency at 3k hits per second. My theory at the moment is that the network is the bottleneck: the server can't send enough data fast enough.

    External testing from blitz.io at 1k concurrency shows my hits/s capping at 180, with pages taking longer and longer to respond as the server is only able to return 180 per second. I've served a blank file from nginx and benchmarked it: it scales 1:1 with concurrency.

    To rule out IO/memcached bottlenecks (nginx normally pulls from memcached), I served a static version of the cached page from the filesystem. The results are very similar to my original test: I'm capped at around 180 RPS. Splitting the HTML page in half gives me double the RPS, so it's definitely limited by the size of the page.

    If I run ApacheBench internally from the local server, I get consistent results of around 4k RPS on both the full page and the half page, at high transfer rates:

        Transfer rate: 62586.14 [Kbytes/sec] received

    If I run ab from an external server, I get around 180 RPS, same as the blitz.io results. How do I know it's not intentional throttling? If I benchmark from multiple external servers, all results become poor, which leads me to believe the problem is my server's outbound traffic, not a download-speed issue with my benchmarking servers or blitz.io. So I'm back to my conclusion that my server can't send data fast enough. Am I right? Are there other ways to interpret this data? Is the solution/optimization to set up multiple load-balanced servers that can each serve 180 hits per second? I'm quite new to server optimization, so I'd appreciate any confirmation in interpreting this data.

    Outbound traffic. Here's more information about the outbound bandwidth: the network graph shows a maximum output of 16 Mb/s (16 megabits per second), which doesn't sound like much at all. Following a suggestion about throttling, I looked into this and found that Linode has a 50 Mbps cap (which, apparently, I'm not even close to hitting). I had it raised to 100 Mbps. Since Linode caps my traffic, and I'm not even hitting the cap, does this mean that my server should indeed be capable of outputting up to 100 Mbps, but is limited by some other internal bottleneck? I just don't understand how networks at this scale work: can a server literally send data as fast as it can read it from the HDD? Is the network pipe that big?

    In conclusion: (1) Based on the above, I'm thinking I can raise my 180 RPS by adding an nginx load balancer on top of a multi-nginx-server setup, at exactly 180 RPS per server behind the LB. (2) If Linode has a 50/100 Mbit limit that I'm not hitting at all, there must be something I can do to hit that limit with my single-server setup. If I can read and transmit data fast enough locally, and Linode even bothers to have a 50/100 Mbit cap, there must be an internal bottleneck, one I'm not sure how to detect, that keeps me from hitting those caps. Correct? I realize the question is huge and vague now, but I'm not sure how to condense it. Any input is appreciated on any conclusion I've made.
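
    A rough sanity check on my own numbers: the internal test moved 62,586 KB/s at roughly 4k RPS, i.e. about 15-16 KB per page. At 180 RPS that works out to roughly 180 x 15.6 KB, around 2.8 MB/s or 22 Mb/s, the same order of magnitude as the 16 Mb/s peak on the network graph. That seems consistent with the external tests being limited by outbound bytes per second rather than by request handling.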

  • Troubleshooting Website problems within the local network

    - by HaydnWVN
    We have an external website which opens fine on some PCs, yet seems to time out (or shows symptoms of timing out, without ever actually doing so) on others. It seems to only affect (some of) our newer HP Pro 3305 MT workstations, all running Win7 32-bit SP1 with all updates. Older PCs (Win7 32-bit SP1 and WinXP) are unaffected. Using Google Chrome or Firefox makes no difference, and opening the website in IE9 Compatibility Mode shows exactly the same symptoms. All PCs are on the same local network (workgroup), using the same DNS server and gateway (in-house), on the same internet connection and the same subnet. There is no proxy server, no content filtering, no load balancing, etc. The only group policy in effect (locally) is for update scheduling. Local firewalls are all the same (Kaspersky WP4) and our external-facing firewall has no IP-specific settings. I have no control over the external website; traceroute shows the same destination on all PCs. It is a fairly popular website in our industry (horticulture) and I'm not aware of anyone else (even other sites within our sister companies) with the same problem.

    Update: I used Fiddler2 to monitor the HTTP request; it seems it is not getting fulfilled for some reason. Request sent:

        GET http://www.rhs.org.uk/ HTTP/1.1
        Host: www.rhs.org.uk
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Log from Fiddler2 of the request:

        This session is not yet complete. Press F5 to refresh when session is complete for updated statistics.
        Request Count: 1
        Bytes Sent: 567 (headers:567; body:0)
        Bytes Received: 0 (headers:0; body:0)

        ACTUAL PERFORMANCE
        --------------
        ClientConnected: 17:02:33.720
        ClientBeginRequest: 17:02:39.118
        GotRequestHeaders: 17:02:39.118
        ClientDoneRequest: 17:02:39.118
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 46ms
        HTTPS Handshake: 0ms
        ServerConnected: 17:02:39.165
        FiddlerBeginRequest: 17:02:39.165
        ServerGotRequest: 17:02:39.165
        ServerBeginResponse: 00:00:00.000
        GotResponseHeaders: 00:00:00.000
        ServerDoneResponse: 00:00:00.000
        ClientBeginResponse: 00:00:00.000
        ClientDoneResponse: 00:00:00.000

        RESPONSE BYTES (by Content-Type)
        --------------
        ~headers~: 0

    Log of a successful request from a working PC (done this morning; excuse the timestamps differing from above):

        Request Count: 1
        Bytes Sent: 493 (headers:493; body:0)
        Bytes Received: 20,413 (headers:525; body:19,888)

        ACTUAL PERFORMANCE
        --------------
        ClientConnected: 08:22:47.766
        ClientBeginRequest: 08:22:47.766
        GotRequestHeaders: 08:22:47.766
        ClientDoneRequest: 08:22:47.766
        Determine Gateway: 0ms
        DNS Lookup: 26ms
        TCP/IP Connect: 30ms
        HTTPS Handshake: 0ms
        ServerConnected: 08:22:47.828
        FiddlerBeginRequest: 08:22:47.828
        ServerGotRequest: 08:22:47.828
        ServerBeginResponse: 08:22:48.905
        GotResponseHeaders: 08:22:48.905
        ServerDoneResponse: 08:22:48.905
        ClientBeginResponse: 08:22:48.905
        ClientDoneResponse: 08:22:48.905
        Overall Elapsed: 00:00:01.1388020

        RESPONSE BYTES (by Content-Type)
        --------------
        text/html: 19,888
        ~headers~: 525

    So my question has evolved into: what is the difference between the two requests, and how do I determine why one PC is not getting a reply to its GET request?

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary

    The basic question is: if you have very limited-bandwidth WiFi providing Internet for a small meeting of only a day or two, how do you set the filters on the router to stop one or two users monopolizing all the available bandwidth? For folks who don't have time to read the details below, I am NOT looking for any of these answers:

        - Secure the router and only let a few trusted people use it
        - Tell everyone to turn off unused services and generally police themselves
        - Monitor the traffic with a sniffer and add filters as needed

    I am aware of all of that; none is appropriate, for reasons that will become clear. ALSO NOTE: there is already a question here concerning adequate WiFi at large (500-attendee) conferences. This question concerns SMALL meetings of fewer than 200 people, typically with less than half of those using the WiFi: something that can be handled with a single home or small-office router.

    Background

    I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp, Skepticamp or user-group meeting. These meetings sometimes have technical attendees, but not exclusively. Usually less than a third to half of the attendees actually use the WiFi. The maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000, but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G internet connection and fan it out to multiple users over WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint, which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed; that's a best-case scenario, and often it is considerably slower. The goal in most of these meetings is to give folks access to services like email, web, social media and chat, so they can live-blog or live-tweet the proceedings, or simply stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those that meet those needs.

    Problems

    I have noticed a couple of scenarios where particular users end up absorbing most of the bandwidth on the router, to the detriment of everyone. These boil down to two areas:

        - Intentional use: folks watching YouTube videos, downloading podcasts to their iPod, and otherwise using bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting. At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room had the UStream page up so they could follow the meeting chat, apparently oblivious that they were wasting bandwidth streaming back video of something taking place right in front of them.
        - Unintentional use: a variety of utilities make extensive use of bandwidth in the background, and folks often have them installed on their laptops and smartphones without realizing. Examples: peer-to-peer download programs such as BitTorrent running in the background; automatic software update services (these are legion, as every major software vendor has its own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background); security software that downloads new signatures (anti-virus, anti-malware, etc.); and backup or other software that "syncs" in the background to cloud services.

    For some numbers on how much network bandwidth gets consumed by these non-web, non-email services, check out this recent Wired article: apparently web, email and chat together are now less than a quarter of Internet traffic. If the numbers in that article are correct, filtering out all the other stuff should make the WiFi four times as useful.

    Now, in some situations I've been able to control access by securing the router and limiting it to a very small group of people (typically the meeting organizers), but that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because the 4G coverage at the meeting location in my town is particularly excellent; in a recent test I got 10 megabits down at the meeting site. The "tell people to police themselves" solution mentioned at the top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically last only a day or two and have a very small volunteer staff: I don't have a person to dedicate to network monitoring, and by the time we got the rules fully tweaked the meeting would be over.

    What I've Got

    First, I figured I would use OpenDNS's domain-filtering rules to filter out whole classes of sites; a number of video and peer-to-peer sites can be wiped out this way. (Yes, I am aware that filtering via DNS technically leaves the services accessible; remember, these are largely non-technical users attending a 2-day meeting. It's enough.) I figured I would start with a set of category selections in OpenDNS's UI. I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put elaborate filters on the firewall, but I don't care too much: these meetings don't last very long, so it's probably not worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video, if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach.

    What I Need

    Note that OpenDNS focuses on things that are "objectionable" in one context or another: video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we simply want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legitimate things in the background. Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services:

        - Microsoft automatic updates
        - Apple automatic updates
        - Adobe automatic updates
        - Google automatic updates
        - Other major software-update services
        - Major virus/malware/security signature updates
        - Major background backup services
        - Other services that run in the background and can eat lots of bandwidth

    I would also welcome any other applicable suggestions. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS approach.
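
    On the DNS-lockdown point, a hypothetical iptables-style sketch (assuming the router, or a box in front of it, exposes such rules; the Cradlepoint UI may express this differently): drop forwarded client DNS so only the router's own resolver, which points at OpenDNS, can answer.

        # block clients from reaching outside DNS servers directly
        iptables -A FORWARD -p udp --dport 53 -j DROP
        iptables -A FORWARD -p tcp --dport 53 -j DROP
        # lookups the router itself makes (to OpenDNS) traverse the OUTPUT
        # chain, not FORWARD, so they are unaffected by the rules above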

  • Good C# Networking Book

    - by Dan
    Hey guys, I am looking for a good, solid introductory book on the fundamentals of network programming in C#. For example, I have looked at this one: http://www.amazon.com/C-Network-Programming-Richard-Blum/dp/0782141765/ref=pd_sim_b_5 but it is quite old now. Has anyone used one recently? I would greatly appreciate it. Thanks, Dan

  • Per Application Packet Analyzer

    - by Anindya Chatterjee
    Is there any tool that can analyze network traffic per application? Wireshark does not have per-application filtering, and Fiddler does not give proper logging for an arbitrary application. Can anyone help me find an app that can analyze the network traffic originating from a single application and log the traffic for that particular application only?

  • What is the iPhone simulator IP address?

    - by Chris G
    Hi, I have been looking for the answer to this question for some time. I am doing network programming for the iPhone, and I need to use the IP address of the device. This isn't a problem on the physical device, as it has its own IP address on the network. However, I was wondering what the case is with the simulator: does it get assigned an IP address of its own? Thanks in advance for any help, CG

  • Account sharing among Ubuntu machines

    - by muckabout
    I'd like a simple and secure system to allow users on our network to have their account (e.g., 'myname') work on every machine in the network (e.g., so that they could ssh to any machine and have the same userid and mounted smb share). Any suggestions?

  • C# - NetworkChangeEventHandler

    - by Andy
    I have a small application which catches the network-availability-changed event, and it works fine on the client desktop machine (running XP). But when I tested the same thing on Vista, by disabling the network and enabling it again, the event is not triggered.

        NetworkChange.NetworkAvailabilityChanged +=
            new NetworkAvailabilityChangedEventHandler(NetworkChange_NetworkAvailabilityChanged);

        private void NetworkChange_NetworkAvailabilityChanged(object sender, NetworkAvailabilityEventArgs e)
        {
            // ...
        }

    Has .NET Framework 3.5 introduced any new solution for this?
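
    A hedged sketch of one workaround, using only APIs in System.Net.NetworkInformation: also listen for address changes (raised on adapter enable/disable, cable plug/unplug, etc.) and re-evaluate availability yourself, rather than relying solely on NetworkAvailabilityChanged firing.

        using System;
        using System.Net.NetworkInformation;

        class NetworkWatcher
        {
            static void Main()
            {
                NetworkChange.NetworkAvailabilityChanged +=
                    (s, e) => Report(e.IsAvailable);
                // Also fires when an adapter is disabled/enabled or re-addressed
                NetworkChange.NetworkAddressChanged +=
                    (s, e) => Report(NetworkInterface.GetIsNetworkAvailable());
                Console.ReadLine(); // keep the process alive to observe events
            }

            static void Report(bool available)
            {
                Console.WriteLine("Network available: " + available);
            }
        }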

  • Activation Function, Initializer function, etc, effects on neural networks for face detection

    - by harry
    There are various activation functions (sigmoid, tanh, etc.) and also a few initializer functions (Nguyen-Widrow, random, normalized, constant, zero, etc.). Do these have much effect on the outcome of a neural network specialising in face detection? Right now I'm using the tanh activation function and just randomising all the weights between -0.5 and 0.5. I have no idea if this is the best approach, though, and with 4 hours to train the network each time, I'd rather ask on here than experiment!
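
    For comparison, Nguyen-Widrow initialization is a small step beyond plain random weights. A sketch for a single hidden layer with n inputs and h hidden neurons (the weights array is a hypothetical name, and 0.7 is the commonly cited constant): start from the same random values in [-0.5, 0.5], then rescale each neuron's weight vector to length beta = 0.7 * h^(1/n), spreading the neurons' active regions across the input space.

        double beta = 0.7 * Math.pow(h, 1.0 / n);
        for (int j = 0; j < h; j++) {
            double norm = 0;
            for (int i = 0; i < n; i++) { norm += weights[j][i] * weights[j][i]; }
            norm = Math.sqrt(norm);
            for (int i = 0; i < n; i++) { weights[j][i] *= beta / norm; } // vector length = beta
        }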
