Search Results

Search found 3387 results on 136 pages for 'n layer'.


  • Windows 7 Machine Makes Router Drop -All- Wireless Connections

    - by Hammer Bro.
    Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L.

    The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes.

    So here are my basic observations:

    - If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
    - Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and router.
    - Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again.
    - The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless.
    - All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT.

    What I've tried:

    - Making sure each computer is being assigned a distinct IP. (They are.)
    - Disabling UPnP and Stateful Packet Inspection on the router.
    - Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine.
    - Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).

    What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just Operating System itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions.

    I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me.

    Anyone have any ideas what the cause and possible solution could be?

    Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.

    Read the article

  • Reasons why I'm using PHP rather than ASP.NET [closed]

    - by spirytus
    I have a basic idea of how ASP.NET works, but I find the whole framework hard to use if you are a newbie. I found compiling, web applications vs. websites and all that stuff you should know to program in ASP.NET a bit tedious, and so personally I go with PHP to create small to medium applications for my clients. There are a couple of reasons for it:

    - PHP is an easy scripting language: top to bottom and you're done. You can still create objects and classes, and if you have an idea of MVC it's fairly easy to create a basic structure yourself so you can keep your presentation layer "relatively" clean. Although I find myself still mixing basic logic into my views, I am trying to stick to booleans and foreach loops. ASP.NET keeps it cleaner as far as I know, and I agree that this is great.
    - Heaps of free stuff for PHP and lots of help everywhere.
    - Although the choice of IDEs for PHP is very limited, I still don't have to be stuck with Visual Studio. Let's be honest... you can program in whatever you like, but does anyone use anything else other than VS? For the basic applications I create, Visual Studio doesn't come even close to a notepad :) / phpEdit (or similar) combination. It lacks many features I constantly use, although armies of developers are using it and it must be for good reason. Being on the market for that long should make editing much easier. Personally not a big fan of VS though.
    - I know .NET comes with an awesome set of controls, validators etc., which is truly awesome. For me the problem starts if I want my validator to behave in a slightly different way and, let's say, fade in/out error messages. I know it's possible to extend its behavior, plug into the lifecycle and output different JS to the client and so on. I just never see it happen in places I work, and honestly, I don't even think most of the .NET developers I worked with during the last couple of years would know how to do that. In PHP I have to grab some plugin for jQuery and use it for validation, which is a fairly easy task once you have done it before. Again I'm sure it's easy for .NET gurus, but for a newbie like me it's almost impossible.
    - I found that many ASP.NET programmers are very limited in what they are able to do and basically whack together .NET applications using the same lame set of controls, not even bothering to look into how it works and what if? Now I don't want to anger anyone :) I know there is a huge number of excellent .NET developers who know their stuff and are able to extend controls and do all that magic in no time. I found it a bit annoying though that many of them stick to what is provided without even trying to make it better. Nothing against .NET here, just a thought really :)
    - I remember when ASP.NET came out, the idea was that front-end people would not be able to screw anything up and could do their front-end stuff without worrying about what happens behind. Now it's never that easy and I always tend to get server-side people to fix this and that during development. Things like IDs assigned to controls can very easily make your application break, and if someone is a pure HTML guy using VS it's easy to break something.

    Those are my thoughts on PHP and .NET and the reasons why I go with PHP for my work. I know that once learned, ASP.NET is an awesome technology, and summing it all up PHP doesn't even come close to it. For someone like me, however, individually developing small basic applications for clients, PHP seems to work much better. Please let me know your thoughts on the above :)

    Read the article

  • Load-balancing between a Procurve switch and a server

    - by vlad
    Hello, I've been searching around the web for this problem I've been having. It's similar in a way to this question: How exactly & specifically does layer 3 LACP destination address hashing work?

    My setup is as follows: I have a central switch, a Procurve 2510G-24, image version Y.11.16. It's the center of a star topology; there are four switches connected to it via a single gigabit link. Those switches service the users. On the central switch, I have a server with two gigabit interfaces that I want to bond together in order to achieve higher throughput, and two other servers that have single gigabit connections to the switch. The topology looks as follows:

        sw1   sw2   sw3   sw4
         |     |     |     |
        ---------------------
        |        sw0        |
        ---------------------
          ||      |      |
         srv1   srv2   srv3

    The servers were running FreeBSD 8.1. On srv1 I set up a lagg interface using the lacp protocol, and on the switch I set up a trunk for the two ports using lacp as well. The switch showed that the server was a lacp partner, I could ping the server from another computer, and the server could ping other computers. If I unplugged one of the cables, the connection would keep working, so everything looked fine. Until I tested throughput. There was only one link used between srv1 and sw0.

    All testing was conducted with iperf, and load distribution was checked with systat -ifstat. I was looking to test the load balancing for both receive and send operations, as I want this server to be a file server. There were therefore two scenarios:

    - iperf -s on srv1 and iperf -c on the other servers
    - iperf -s on the other servers and iperf -c on srv1, connected to all the other servers.

    Every time only one link was used. If one cable was unplugged, the connections would keep going. However, once the cable was plugged back in, the load was not distributed. Each and every server is able to fill the gigabit link. In one-to-one test scenarios, iperf was reporting around 940Mbps. The CPU usage was around 20%, which means that the servers could withstand a doubling of the throughput.

    srv1 is a dell poweredge sc1425 with onboard intel 82541GI nics (em driver on freebsd). After troubleshooting a previous problem with vlan tagging on top of a lagg interface, it turned out that the em could not support this. So I figured that maybe something else is wrong with the em drivers and / or lagg stack, so I started up backtrack 4r2 on this same server. So srv1 now uses linux kernel 2.6.35.8. I set up a bonding interface bond0. The kernel module was loaded with option mode=4 in order to get lacp. The switch was happy with the link, I could ping to and from the server. I could even put vlans on top of the bonding interface. However, only half the problem was solved:

    - if I used srv1 as a client to the other servers, iperf was reporting around 940Mbps for each connection, and bwm-ng showed, of course, a nice distribution of the load between the two nics;
    - if I run the iperf server on srv1 and tried to connect with the other servers, there was no load balancing.

    I thought that maybe I was out of luck and the hashes for the two mac addresses of the clients were the same, so I brought in two new servers and tested with the four of them at the same time, and still nothing changed. I tried disabling and reenabling one of the links, and all that happened was the traffic switched from one link to the other and back to the first again. I also tried setting the trunk to "plain trunk mode" on the switch, and experimented with other bonding modes (roundrobin, xor, alb, tlb) but I never saw any traffic distribution.

    One interesting thing, though: one of the four switches is a Cisco 2950, image version 12.1(22)EA7. It has 48 10/100 ports and 2 gigabit uplinks. I have a server (call it srv4) with a 4 channel trunk connected to it (4x100), FreeBSD 8.0 release. The switch is connected to sw0 via gigabit. If I set up an iperf server on one of the servers connected to sw0 and a client on srv4, ALL 4 links are used, and iperf reports around 330Mbps. systat -ifstat shows all four interfaces are used.

    The cisco port-channel uses src-mac to balance the load. The HP should use both the source and destination according to the manual, so it should work as well. Could this mean there is some bug in the HP firmware? Am I doing something wrong?
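
    For reference, the "hashing" the vendors describe boils down to a deterministic function of the address pair, so a single source/destination conversation only ever rides one member link; distribution shows up only across many different address pairs. The sketch below is purely illustrative C#, not HP's or Cisco's actual firmware algorithm; it mirrors what the Linux bonding driver's "layer2" transmit policy does (XOR of the MAC addresses modulo the number of active links):

        // Illustrative only -- not the actual switch firmware hash.
        static class TrunkHash
        {
            public static int SelectLink(byte[] srcMac, byte[] dstMac, int linkCount)
            {
                int hash = srcMac[5] ^ dstMac[5];   // low-order byte of each 6-byte MAC
                return hash % linkCount;            // index of the member port that carries this pair
            }
        }

    Under such a scheme, the same two hosts always hash to the same port, which is why per-pair iperf tests can never exceed one link's worth of throughput even when the bond itself is healthy.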

    Read the article

  • Resilient backpropagation neural network - question about gradient

    - by Raf
    Hello guys, first I want to say that I'm really new to neural networks and I don't understand them very well ;) I've made my first C# implementation of the backpropagation neural network. I've tested it using XOR and it looks like it works. Now I would like to change my implementation to use resilient backpropagation (Rprop - http://en.wikipedia.org/wiki/Rprop). The definition says: "Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each weight." Could somebody tell me what the partial derivative over all patterns is? And how should I compute this partial derivative for a neuron in the hidden layer? Thanks a lot.

    UPDATE: My implementation is based on this Java code: www_.dia.fi.upm.es/~jamartin/downloads/bpnn.java

    My backPropagate method looks like this:

        public double backPropagate(double[] targets)
        {
            double error, change;

            // calculate error terms for output
            double[] output_deltas = new double[outputsNumber];
            for (int k = 0; k < outputsNumber; k++)
            {
                error = targets[k] - activationsOutputs[k];
                output_deltas[k] = Dsigmoid(activationsOutputs[k]) * error;
            }

            // calculate error terms for hidden
            double[] hidden_deltas = new double[hiddenNumber];
            for (int j = 0; j < hiddenNumber; j++)
            {
                error = 0.0;
                for (int k = 0; k < outputsNumber; k++)
                {
                    error = error + output_deltas[k] * weightsOutputs[j, k];
                }
                hidden_deltas[j] = Dsigmoid(activationsHidden[j]) * error;
            }

            // update output weights
            for (int j = 0; j < hiddenNumber; j++)
            {
                for (int k = 0; k < outputsNumber; k++)
                {
                    change = output_deltas[k] * activationsHidden[j];
                    weightsOutputs[j, k] = weightsOutputs[j, k] + learningRate * change + momentumFactor * lastChangeWeightsForMomentumOutpus[j, k];
                    lastChangeWeightsForMomentumOutpus[j, k] = change;
                }
            }

            // update input weights
            for (int i = 0; i < inputsNumber; i++)
            {
                for (int j = 0; j < hiddenNumber; j++)
                {
                    change = hidden_deltas[j] * activationsInputs[i];
                    weightsInputs[i, j] = weightsInputs[i, j] + learningRate * change + momentumFactor * lastChangeWeightsForMomentumInputs[i, j];
                    lastChangeWeightsForMomentumInputs[i, j] = change;
                }
            }

            // calculate error
            error = 0.0;
            for (int k = 0; k < outputsNumber; k++)
            {
                error = error + 0.5 * (targets[k] - activationsOutputs[k]) * (targets[k] - activationsOutputs[k]);
            }
            return error;
        }

    So can I use the change = hidden_deltas[j] * activationsInputs[i] variable as the gradient (partial derivative) for checking the sign?
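
    For context, "over all patterns" simply means the batch gradient: sum the per-pattern change values over the whole training set before deciding anything, and note that because error = target - output, that accumulated change is the negative of dE/dw. A minimal, hedged C# sketch of the per-weight Rprop update applied once per epoch (the constants are the standard 1.2 / 0.5 from the Rprop paper; the method itself is a hypothetical helper, not the poster's code):

        using System;

        static class Rprop
        {
            const double EtaPlus = 1.2, EtaMinus = 0.5;
            const double StepMax = 50.0, StepMin = 1e-6;

            // grad is dE/dw summed over ALL training patterns for this weight
            // (i.e. minus the accumulated "change"); stepSize starts around 0.1.
            public static void Update(ref double weight, ref double stepSize,
                                      ref double prevGrad, double grad)
            {
                double signProduct = prevGrad * grad;
                if (signProduct > 0)
                {
                    // Same sign as last epoch: accelerate and step against the gradient.
                    stepSize = Math.Min(stepSize * EtaPlus, StepMax);
                    weight -= Math.Sign(grad) * stepSize;
                    prevGrad = grad;
                }
                else if (signProduct < 0)
                {
                    // Sign flipped: we jumped over a minimum, so shrink the step
                    // and skip the weight change this epoch.
                    stepSize = Math.Max(stepSize * EtaMinus, StepMin);
                    prevGrad = 0.0;
                }
                else
                {
                    // One of the gradients was zero: take a plain signed step.
                    weight -= Math.Sign(grad) * stepSize;
                    prevGrad = grad;
                }
            }
        }

    Only the sign of the accumulated gradient is used; the learning rate and momentum terms from the code above are replaced entirely by the per-weight stepSize.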

    Read the article

  • Problem with custom paging in ASP.NET

    - by JohnCC
    I'm trying to add custom paging to my site using the ObjectDataSource paging. I believe I've correctly added the stored procedures I need, and brought them up through the DAL and BLL. The problem I have is that when I try to use it on a page, I get an empty datagrid.

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="PageTest.aspx.cs" Inherits="developer_PageTest" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:ObjectDataSource ID="ObjectDataSource1" SelectMethod="GetMessagesPaged"
                    EnablePaging="true" SelectCountMethod="GetMessagesCount"
                    TypeName="MessageTable" runat="server">
                    <SelectParameters>
                        <asp:Parameter Name="DeviceID" Type="Int32" DefaultValue="112" />
                        <asp:Parameter Name="StartDate" Type="DateTime" DefaultValue="" ConvertEmptyStringToNull="true"/>
                        <asp:Parameter Name="EndDate" Type="DateTime" DefaultValue="" ConvertEmptyStringToNull="true"/>
                        <asp:Parameter Name="BasicMatch" Type="Boolean" ConvertEmptyStringToNull="true" DefaultValue="" />
                        <asp:Parameter Name="ContainsPosition" Type="Boolean" ConvertEmptyStringToNull="true" DefaultValue="" />
                        <asp:Parameter Name="Decoded" Type="Boolean" ConvertEmptyStringToNull="true" DefaultValue="" />
                        <%--
                        <asp:Parameter Name="StartRowIndex" Type="Int32" DefaultValue="10" />
                        <asp:Parameter Name="MaximumRows" Type="Int32" DefaultValue="10" />
                        --%>
                    </SelectParameters>
                </asp:ObjectDataSource>
                <asp:GridView ID="GridView1" runat="server" DataSourceID="ObjectDataSource1"
                    AllowPaging="true" PageSize="10"></asp:GridView>
                <br />
                <asp:Label runat="server" ID="lblCount"></asp:Label>
            </div>
            </form>
        </body>
        </html>

    When I set EnablePaging to false on the ODS, and add the commented-out StartRowIndex and MaximumRows params in the markup, I get data, so it really seems like the data layer is behaving as it should. There's code in the code file to put the value of the GetMessagesCount call in lblCount, and that always has a sensible value in it. I've tried breaking in the BLL and stepping through, and the backend is getting called, and it is returning what looks like the right information and data, but somehow between the ODS and the GridView it's vanishing. I created a mock data source which returned numbered rows of random numbers and attached it to this form, and the custom paging worked, so I think my understanding of the technique is good. I just can't see why it fails here! Any help really appreciated. (EDIT .. here's the code behind).

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.ComponentModel;

        public partial class developer_PageTest : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                lblCount.Text = String.Format("Count = {0}",
                    MessageTable.GetMessagesCount(112, null, null, null, null, null));
            }
        }
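
    For comparison, this is the shape ObjectDataSource expects from the BLL when EnablePaging="true": the select method must accept two extra parameters whose names match StartRowIndexParameterName / MaximumRowsParameterName (defaults shown), and the count method takes the same filter parameters without the paging pair. Everything below is a hypothetical sketch -- the Message stub and parameter names are for illustration only, not the poster's actual MessageTable -- but it is worth double-checking that the real signatures, and the row-window arithmetic inside the paged stored procedure (startRowIndex is zero-based), line up with this contract:

        using System;
        using System.Collections.Generic;

        public class Message { /* stub for illustration */ }

        public static class MessageTablePagingContract
        {
            // Called by the ODS for each page; the last two parameter names matter.
            public static List<Message> GetMessagesPaged(
                int deviceID, DateTime? startDate, DateTime? endDate,
                bool? basicMatch, bool? containsPosition, bool? decoded,
                int startRowIndex, int maximumRows)
            {
                // ... call the paged stored procedure with the zero-based row window ...
                throw new NotImplementedException();
            }

            // Called once per bind so the GridView can compute the pager.
            public static int GetMessagesCount(
                int deviceID, DateTime? startDate, DateTime? endDate,
                bool? basicMatch, bool? containsPosition, bool? decoded)
            {
                // ... call the count stored procedure ...
                throw new NotImplementedException();
            }
        }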

    Read the article

  • multiple-inheritance substitution

    - by Luigi
    I want to write a module (framework specific) that would wrap and extend the Facebook PHP-sdk (https://github.com/facebook/php-sdk/). My problem is - how to organize the classes in a nice way.

    So getting into details - the Facebook PHP-sdk consists of two classes:

    - BaseFacebook - abstract class with all the stuff the sdk does
    - Facebook - extends BaseFacebook, and implements the parent's abstract persistence-related methods with default session usage

    Now I have some functionality to add:

    - Facebook class substitution, integrated with the framework session class
    - shorthand methods, that run the api calls I use mostly (through BaseFacebook::api()),
    - authorization methods, so I don't have to rewrite this logic every time,
    - configuration, sucked up from framework classes, instead of passed as params
    - caching, integrated with the framework cache module

    I know something has gone very wrong, because I have too much inheritance that doesn't look very normal. Wrapping everything in one "complex extension" class also seems too much. I think I should have a few classes working together - but I get into problems like: if the cache class doesn't really extend and override the BaseFacebook::api() method, the shorthand and authentication classes won't be able to use the caching. Maybe some kind of a pattern would be right here? How would you organize these classes and their dependencies?

    EDIT 04.07.2012 Bits of code, related to the topic. This is how the base class of the Facebook PHP-sdk looks:

        abstract class BaseFacebook
        {
            // ... some methods

            public function api(/* polymorphic */) {
                // ... method, that makes api calls
            }

            public function getUser() {
                // ... tries to get user id from session
            }

            // ... other methods

            abstract protected function setPersistentData($key, $value);
            abstract protected function getPersistentData($key, $default = false);

            // ... few more abstract methods
        }

    Normally the Facebook class extends it and implements those abstract methods. I replaced it with my substitute - the Facebook_Session class:

        class Facebook_Session extends BaseFacebook
        {
            protected function setPersistentData($key, $value) {
                // ... method body
            }

            protected function getPersistentData($key, $default = false) {
                // ... method body
            }

            // ... implementation of other abstract functions from BaseFacebook
        }

    Ok, then I extend this more with shorthand methods and configuration variables:

        class Facebook_Custom extends Facebook_Session
        {
            public function __construct() {
                // ... call parent's constructor with parameters from framework config
            }

            public function api_batch() {
                // ... a wrapper for parent's api() method
                return $this->api('/?batch=' . json_encode($calls), 'POST');
            }

            public function redirect_to_auth_dialog() {
                // method body
            }

            // ... more methods like this, for common queries / authorization
        }

    I'm not sure if this isn't too much for a single class (authorization / shorthand methods / configuration). Then there comes another extending layer - cache:

        class Facebook_Cache extends Facebook_Custom
        {
            public function api() {
                $cache_file_identifier = $this->getUser();

                if (/* cache_file_identifier is not null and found a valid file with cached query result */) {
                    // return the result
                } else {
                    try {
                        // call Facebook_Custom::api, cache and return the result
                    } catch (FacebookApiException $e) {
                        // if Access Token is expired force refreshing it
                        parent::redirect_to_auth_dialog();
                    }
                }
            }

            // .. some other stuff related to caching
        }

    Now this pretty much works. A new instance of Facebook_Cache gives me all the functionality. Shorthand methods from Facebook_Custom use caching, because Facebook_Cache overrode the api() method. But here is what is bothering me:

    - I think it's too much inheritance.
    - It's all very tightly coupled - look how I had to specify Facebook_Custom::api instead of parent::api to avoid an api() method loop when extending the Facebook_Cache class.
    - Overall mess and ugliness.

    So again, this works but I'm just asking about patterns / ways of doing this in a cleaner and smarter way.

    Read the article

  • Unicast traffic between hosts on a switch leaving the switch by its uplink. Why?

    - by Rich Lafferty
    I have a weird thing happening on our network at my office which I can't quite get my head around. In particular I can't tell if it's a problem with a switch, or a problem with configuration.

    We have a Cisco SG300-52 switch (sw01) in the top of a rack in our server room, connected to another SG300-28 that acts as our core switch (core01). Both run layer 2 only, our firewalls do routing between VLANs. They have a dozen or so VLANs between them. Gi1 on sw01 is a trunk port connected to gi1 on core01. (Disclosure: There are other switches in our environment but I'm pretty sure I've isolated the problem down to these two. Happy to provide more info if necessary.)

    The behaviour I'm seeing is limited to one VLAN, vlan 12 -- or, at least, it's not happening on the other ones I checked (It's hard to guarantee the absence of packets), and it is: sw01 is forwarding, to core01, traffic which is between two hosts which are both plugged into sw01. (I noticed this because the IDS in our firewall gave a false positive on traffic which should not reach the firewall.)

    We noticed this mostly between our two dhcp/dns servers, net01 (10.12.0.10) and net02 (10.12.0.11). net01 is physical hardware and net02 is on a VMware ESX server. net01 is connected to gi44 on sw01 and net02's ESX server to gi11.

        [net01]----gi44-[sw01]-gi1----gi1-[core01]
        [net02]----gi11/

    Let's see some interfaces! Remember, vlan 12 is the problem vlan. Of the others I explicitly verified that vlan 27 was not affected. Here's the two hosts' ports (esx01 contains net02):

        sw01#sh run int gi11
        interface gigabitethernet11
         description esx01
         lldp med disable
         switchport trunk allowed vlan add 5-7,11-13,100
         switchport trunk native vlan 27
        !
        sw01#sh run int gi44
        interface gigabitethernet44
         description net01-1
         lldp med disable
         switchport mode access
         switchport access vlan 12
        !

    Here's the trunk on sw01:

        sw01#sh run int gi1
        interface gigabitethernet1
         description "trunk to core01"
         lldp med disable
         switchport trunk allowed vlan add 4-7,11-13,27,100
        !

    And the other end of the trunk on core01:

        interface gigabitethernet1
         description sw01
         macro description switch
         switchport trunk allowed vlan add 2-7,11-16,27,100
        !

    I have a monitor port on core01, thus:

        core01#sh run int gi12
        interface gigabitethernet12
         description "monitor port"
         port monitor GigabitEthernet 1
        !

    And the monitor port on core01 sees unicast traffic going between net01 and net02, both of which are on sw01! I've verified this with a monitor port on sw01 that sees the net01-net02 unicast traffic leaving via gi1 too.

    sw01 knows that both of those hosts are on ports that are not its trunk port:

        :) ratchet$ arp -a | grep net
        net02.2ndsiteinc.com (10.12.0.11) at 00:0C:29:1A:66:15 [ether] on eth0
        net01.2ndsiteinc.com (10.12.0.10) at 00:11:43:D8:9F:94 [ether] on eth0

        sw01#sh mac addr addr 00:0C:29:1A:66:15
        Aging time is 300 sec
        Vlan     Mac Address           Port       Type
        -------- --------------------- ---------- ----------
        12       00:0c:29:1a:66:15     gi11       dynamic

        sw01#sh mac addr addr 00:11:43:D8:9F:94
        Aging time is 300 sec
        Vlan     Mac Address           Port       Type
        -------- --------------------- ---------- ----------
        12       00:11:43:d8:9f:94     gi44       dynamic

    I also brought up an unused port on sw01 on vlan 12, but the unicast traffic was (as best as I could tell) not coming out that port. So it doesn't look like sw01 is pushing it out all its ports, just the right ports and also gi1! I've verified that sw01 is not filling up its address-table:

        sw01#sh mac addr count
        This may take some time.
        Capacity : 8192
        Free     : 7983
        Used     : 208

    The full configs for both core01 and sw01 are available: core01, sw01. Finally, versions:

        sw01#sh ver
        SW version    1.1.2.0 ( date  12-Nov-2011 time  23:34:26 )
        Boot version  1.0.0.4 ( date  08-Apr-2010 time  16:37:57 )
        HW version    V01

        core01#sh ver
        SW version    1.1.2.0 ( date  12-Nov-2011 time  23:34:26 )
        Boot version  1.1.0.6 ( date  11-May-2011 time  18:31:00 )
        HW version    V01

    So my understanding is this: sw01 should take unicast traffic for net01 and send it only out net02's port, and vice versa; none of it should go out sw01's uplink. But core01, receiving traffic on gi1 for a host it knows is on gi1, is right in sending it out all of its ports. (That is: sw01 is misbehaving, but core01 is doing what it should given the circumstances.)

    My question is: Why is sw01 sending that unicast traffic out its uplink, gi1? (And pre-emptively: yes, I know SG300s leave much to be desired, and yes, we should have spanning-tree enabled, but that's where I'm at right now.)

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections [closed]

    - by Hammer Bro.
    Note: I accidentally originally posted this question over at SuperUser, and I still think the issue is caused by some low-level networking practice of Windows 7, but I think the expertise here would be more apt to figuring it out. Apologies for the cross-post.

    Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L.

    The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes.

    So here are my basic observations:

    - If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
    - Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and router.
    - Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again.
    - The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless.
    - All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT.

    What I've tried:

    - Making sure each computer is being assigned a distinct IP. (They are.)
    - Disabling UPnP and Stateful Packet Inspection on the router.
    - Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine.
    - Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).

    What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just Operating System itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions.

    I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me.

    Anyone have any ideas what the cause and possible solution could be?

    Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.

    Read the article

  • Can't configure PAM + LDAP on Debian Lenny - Getting error=49 on server logs

    - by Jorge Suárez de Lis
    I've been migrating some servers and desktops using Ubuntu 10.04 from getting the users from an old OpenLDAP implementation to a newer CentOS Active Directory. I haven't had any problems so far, until I reached a Debian Lenny server.

    I've set up the server like the others, setting /etc/ldap.conf and /etc/ldap/ldap.conf. However, when I issue "getent passwd", I get nothing from the LDAP server. Reading the pam_ldap manpage, I realized that /etc/ldap.conf was not an accepted file by pam_ldap - it worked with Ubuntu though - so I renamed it to /etc/pam_ldap.conf. Same result. However, once I've changed the name of this file, when I log in using SSH I get this on the LDAP server logs:

        [20/Jul/2012:11:19:40 +0200] conn=16501 fd=155 slot=155 connection from x.x.x.50 to 10.1.176.237
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 RESULT err=49 tag=97 nentries=0 etime=0

    The password isn't working. I don't know what could be wrong; anything else seems to be OK. That user/password combination is working from other clients:

        [20/Jul/2012:11:29:39 +0200] conn=16528 fd=188 slot=188 connection from x.x.x.224 to 10.1.176.237
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=jorge.suarez,ou=people,ou=citius,dc=inv,dc=usc,dc=es"

    I'm using SSHA for storing passwords on the LDAP server. Maybe this is not supported by Debian Lenny? On pam_ldap.conf, I've set up this, as in all the other servers:

        # Do not hash the password at all; presume
        # the directory server will do it, if
        # necessary. This is the default.
        pam_password md5

    I also tried clear, but it didn't work. Anyway, it's weird that issuing getent passwd still gets me no users. However, if I use pamtest from the package libpam-dotfile to test login, it works:

        # pamtest ssh jorge.suarez
        Trying to authenticate <jorge.suarez> for service <ssh>.
        Password:
        Authentication successful.

        # pamtest foo jorge.suarez
        Trying to authenticate <jorge.suarez> for service <foo>.
        Password:
        Authentication successful.

    But "su" won't work either:

        # su jorge.suarez
        Id. descoñecido: jorge.suarez

    Just the output from getent passwd:

        # getent passwd
        root:x:0:0:root:/root:/bin/bash
        daemon:x:1:1:daemon:/usr/sbin:/bin/sh
        bin:x:2:2:bin:/bin:/bin/sh
        sys:x:3:3:sys:/dev:/bin/sh
        sync:x:4:65534:sync:/bin:/bin/sync
        games:x:5:60:games:/usr/games:/bin/sh
        man:x:6:12:man:/var/cache/man:/bin/sh
        lp:x:7:7:lp:/var/spool/lpd:/bin/sh
        mail:x:8:8:mail:/var/mail:/bin/sh
        news:x:9:9:news:/var/spool/news:/bin/sh
        uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
        proxy:x:13:13:proxy:/bin:/bin/sh
        www-data:x:33:33:www-data:/var/www:/bin/sh
        backup:x:34:34:backup:/var/backups:/bin/sh
        list:x:38:38:Mailing List Manager:/var/list:/bin/sh
        irc:x:39:39:ircd:/var/run/ircd:/bin/sh
        gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
        nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
        libuuid:x:100:101::/var/lib/libuuid:/bin/sh
        Debian-exim:x:101:103::/var/spool/exim4:/bin/false
        statd:x:102:65534::/var/lib/nfs:/bin/false
        sshd:x:104:65534::/var/run/sshd:/usr/sbin/nologin
        luser:x:1000:1000:Usuario local de Burdeos,,,:/home/luser:/bin/bash
        messagebus:x:105:107::/var/run/dbus:/bin/false
        sge-admin:x:1001:1001:Administrador do SGE,,,:/home/cluster/sge-admin:/bin/bash
        ntp:x:107:110::/home/ntp:/bin/false
        haldaemon:x:108:111:Hardware abstraction layer,,,:/var/run/hald:/bin/false
        vde2-net:x:109:114::/var/run/vde2:/bin/false
        uml-net:x:110:115::/home/uml-net:/bin/false
        polkituser:x:111:116:PolicyKit,,,:/var/run/PolicyKit:/bin/false
        Debian-pxe:x:113:65534:Dummy user for Debian pxe package,,,:/home/Debian-pxe:/bin/false

    Nscd was stopped from the beginning.

    Read the article

  • Postmortem debugging with WinDBG.

    - by Drazar
    I have a WCF service running on a server, and occasionally (1-2 times every month) it throws a COMException with the informative message "Unknown error (0x80005008)". When I googled for this particular error I only got threads about problems when creating virtual directories in IIS, and the source code has nothing to do with making a virtual directory in IIS.

        DirectoryServiceLib.LdapProvider.Directory - CreatePost - Could not create employee for 195001010000,000000000000: System.Runtime.InteropServices.COMException (0x80005008): Unknown error (0x80005008)
           at System.DirectoryServices.PropertyValueCollection.PopulateList

    I've taken a memory dump when I catch the exception, for further analysis in WinDBG. After switching to the right thread I executed the !CLRStack command:

        000000001b8ab6d8 000000007708671a [NDirectMethodFrameStandalone: 000000001b8ab6d8] Common.MemoryDump.MiniDumpWriteDump(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr)
        000000001b8ab680 000007ff002808d8 DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr)
        000000001b8ab780 000007ff00280812 Common.MemoryDump.CreateMiniDump(System.String)
        000000001b8ab7e0 000007ff0027b218 DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8ad6d8 000007fef8816869 [HelperMethodFrame: 000000001b8ad6d8]
        000000001b8ad820 000007feec2b6c6f System.DirectoryServices.PropertyValueCollection.PopulateList()
        000000001b8ad860 000007feec225f0f System.DirectoryServices.PropertyValueCollection..ctor(System.DirectoryServices.DirectoryEntry, System.String)
        000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String)
        000000001b8ad8f0 000007ff00274d34 Common.DirectoryEntryExtension.GetStringAttribute(System.String)
        000000001b8ad940 000007ff0027f507 DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost)
        000000001b8ad980 000007ff0027a7cf DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8adbe0 000007ff00279532 DirectoryServiceLib.WCFDirectory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8adc60 000007ff001f47bd DynamicClass.SyncInvokeCreatePost(System.Object, System.Object[], System.Object[])

    My conclusion is that it fails when the code is calling System.DirectoryServices.PropertyCollection.get_Item(System.String). So after issuing !CLRStack -a I get this result:

        000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String)
            PARAMETERS:
                this = <no data>
                propertyName = <no data>
            LOCALS:
                <CLR reg> = 0x0000000001dcef78
                <no data>

    My very first question is why does it display no data for the propertyName? I am kinda new to WinDBG. However, I executed a dumpobject on 0x0000000001dcef78:

        0:013> !do 0x0000000001dcef78
        Name: System.String
        MethodTable: 000007fef66d6960
        EEClass: 000007fef625eec8
        Size: 74(0x4a) bytes
        File: C:\Windows\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
        String: personalprescriptioncode
        Fields:
            MT               Field   Offset  Type           VT  Attr      Value  Name
            000007fef66dc848 40000ed 8       System.Int32   1   instance  24     m_stringLength
            000007fef66db388 40000ee c       System.Char    1   instance  70     m_firstChar
            000007fef66d6960 40000ef 10      System.String  0   shared    static Empty
            >> Domain:Value 0000000000174e10:00000000019d1420 000000001a886f50:00000000019d1420 <<

    So when the source code wants to fetch the personalprescriptioncode from Active Directory (which is used as the persistence layer) it fails. Looking back at the stack, it happens when issuing the Copy method:

        DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost)

    So looking in the source code:

        DirectoryPost postInLimbo = DirectoryPostFactory.Instance().GetDirectoryPost(LdapConfigReader.Instance().GetConfigValue("LimboDN"), idGenPerson.ID.UserId);
        if (postInLimbo != null)
            newPost.Copy(postInLimbo);

    This code is looking for another post in OU=limbo with the same UserId and if it finds one it copies the attributes to the new post. In this case it does, and it fails with personalprescriptioncode. I've looked in Active Directory under OU=Limbo and the post exists there with the attribute personalprescriptioncode=31243.

    Question 1: Why does it display no data for some of the PARAMETERS and LOCALS? Is it the GC that has cleaned up before the memory dump was created?

    Question 2: Is there anything more I can do to get to the solution to this problem?
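
    As an aside on question 1: in an optimized x64 retail build, parameters and locals are frequently enregistered or reused, so !CLRStack -a showing <no data> is usually the JIT's doing rather than the GC's. Since the failure is inside the attribute copy, one low-risk way to get more signal on the next occurrence is to read each attribute defensively and log which one blows up. The sketch below is only an illustration (the helper name and the per-attribute loop are assumptions, not the poster's GetStringAttribute); 0x80005008 is, if memory serves, the ADSI E_ADS_BAD_PARAMETER code:

        using System;
        using System.DirectoryServices;
        using System.Runtime.InteropServices;

        static class DirectoryEntrySafeRead
        {
            // Hypothetical helper: read a single-valued string attribute defensively.
            // Returns null if the attribute is absent or the provider throws.
            public static string TryGetString(DirectoryEntry entry, string attributeName)
            {
                try
                {
                    if (!entry.Properties.Contains(attributeName))
                        return null;                        // attribute not present on this object

                    return entry.Properties[attributeName].Value as string;
                }
                catch (COMException ex)
                {
                    // Log the HRESULT together with the attribute name so the next
                    // monthly occurrence says exactly which attribute failed.
                    Console.Error.WriteLine("Read of '{0}' failed: 0x{1:X8}", attributeName, ex.ErrorCode);
                    return null;
                }
            }
        }

    Wiring something like this (plus the DC name from entry.Options or the connection string) into the Copy loop turns the rare failure into a log line you can correlate with domain controller behaviour at that moment.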

    Read the article

  • Which option do you like the most?

    - by spderosso
    I need to show a list of movies ordered by the date they were registered in the system and the amount of comments users have made about them. E.g.:

        Title     | Register Date | # of comments
        Gladiator | 02-01-2010    | 30
        Matrix    | 01-02-2010    | 20

    It is a web application that follows MVC. So, I have the object "Movie", there is a MovieManager interface that is used for retrieving/persisting data to the database, and a servlet RecentlyAddedMovies.java that forwards to a .jsp to render the list.

    I will explain my problem by showing an example of how I am doing another similar task: showing top ranked movies. Regarding the MovieManager:

        /**
         * Returns a collection of the x best-ranked movies
         * @param x the amount of movies to return
         * @return A collection of movies
         * @throws SQLException
         */
        public Collection<Movie> getTopX(int x) throws SQLException;

    The servlet:

        public class Top5 extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                MovieManager movManager = JDBCMovieManager.getInstance();
                try {
                    req.setAttribute("movies", movManager.getTopX(5));
                } catch (SQLException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                req.getRequestDispatcher("listMovies.jsp").forward(req, resp);
            }
        }

    And listMovies.jsp has, in some place, something like:

        <c:forEach var="movie" items="${movies}">
            ...
            <c:out value="${movie.title}"/>
            ...
        </c:forEach>

    The difference between the top5 problem and the problem I am trying to solve is that with the top5 thing all the data I needed to show in the view was part of the "Movie" object, so the MovieManager simply returned a Collection of Movies. Now, the MovieManager must retrieve an ordered list of (Movie, # of comments). The Movie object has a collection of Comment associated but of course, it lacks an instance variable with the length of that collection. The following things came to my mind:

    a) Create a MovieView object or something like that, that is only used by the "View" and "Controller" layers of the MVC, which is not part of the "Model" in the sense that it is used only for easy retrieval and display of information. This MovieView object will have a title, registerDate and # of comments, so the thing would look something like this:

        /**
         * Returns a collection of the x most recently added movies
         * @param x the amount of movies to return
         * @return A collection of movies
         * @throws SQLException
         */
        public Collection<Movie2> getXRecentlyAddedMovies(int x) throws SQLException;

    The .jsp:

        <c:forEach var="movie2" items="${movies2}">
            ...
            <c:out value="${movie2.title}"/>
            <c:out value="${movie2.registerDate}"/>
            <c:out value="${movie2.noOfComments}"/>
            ...
        </c:forEach>

    b) Add to the Collection the movie has, empty Comments (there is no actual need to retrieve all the comment data from the database) so that the comment collection's size() will show the number of comments that movie has. (This sounds horrible to me.)

    c) Add to the "Movie" object an instance field noOfComments with the setter and getter that will help with this type of case, even if, looking only from the object point of view, it is a redundant field since it already has a Collection that is ".size()"able.

    d) Make the MovieManager retrieve some structure like a Map or a Collection of a two-component array or something like that, that will contain both a Movie object and the associated number of comments, and with JSTL iterate over it and show it accordingly.

    Read the article

  • Error Handling without Exceptions

    - by James
    While searching SO for approaches to error handling related to business rule validation, all I encounter are examples of structured exception handling. MSDN and many other reputable development resources are very clear that exceptions are not to be used to handle routine error cases. They are only to be used for exceptional circumstances and unexpected errors that may occur from improper use by the programmer (but not the user). In many cases, user errors such as fields that are left blank are common, and things which our program should expect, and therefore are not exceptional and not candidates for use of exceptions.

    QUOTE: Remember that the use of the term exception in programming has to do with the thinking that an exception should represent an exceptional condition. Exceptional conditions, by their very nature, do not normally occur; so your code should not throw exceptions as part of its everyday operations. Do not throw exceptions to signal commonly occurring events. Consider using alternate methods to communicate to a caller the occurrence of those events and leave the exception throwing for when something truly out of the ordinary happens.

    For example, proper use:

        private void DoSomething(string requiredParameter)
        {
            if (requiredParameter == null)
                throw new ArgumentException("requiredParameter cannot be null");

            // Remainder of method body...
        }

    Improper use:

        // Renames item to a name supplied by the user. Name must begin with an "F".
        public void RenameItem(string newName)
        {
            // Items must have names that begin with "F"
            if (!newName.StartsWith("F"))
                throw new RenameException("New name must begin with \"F\"");

            // Remainder of method body...
        }

    In the above case, according to best practices, it would have been better to pass the error up to the UI without involving/requiring .NET's exception handling mechanisms.

    Using the same example above, suppose one were to need to enforce a set of naming rules against items. What approach would be best?

    - Having the method return an enumerated result? RenameResult.Success, RenameResult.TooShort, RenameResult.TooLong, RenameResult.InvalidCharacters, etc.
    - Using an event in a controller class to report to the UI class? The UI calls the controller's RenameItem method, and then handles an AfterRename event that the controller raises and that has rename status as part of the event args?
    - The controlling class directly references and calls a method from the UI class that handles the error, e.g. ReportError(string text).
    - Something else...?

    Essentially, I want to know how to perform complex validation in classes that may not be the Form class itself, and pass the errors back to the Form class for display -- but I do not want to involve exception handling where it should not be used (even though it seems much easier!)

    Based on responses to the question, I feel that I'll have to state the problem in terms that are more concrete (UI = User Interface, BLL = Business Logic Layer, in this case just a different class):

    1. User enters value within UI.
    2. UI reports value to BLL.
    3. BLL performs routine validation of the value.
    4. BLL discovers rule violation.
    5. BLL returns rule violation to UI.
    6. UI receives return from BLL and reports error to user.

    Since it is routine for a user to enter invalid values, exceptions should not be used. What is the right way to do this without exceptions?
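
    One common exception-free shape for steps 3-6 is a result object the BLL hands back and the UI renders. This is only a sketch, not a prescription: ValidationResult, ItemRules and the rule messages below are made up for illustration, and an enum-based RenameResult (as in the first option above) works just as well when the set of outcomes is fixed.

        using System.Collections.Generic;

        // Hypothetical result type the BLL returns instead of throwing.
        public class ValidationResult
        {
            public List<string> Errors { get; private set; }
            public bool IsValid { get { return Errors.Count == 0; } }

            public ValidationResult() { Errors = new List<string>(); }
            public void AddError(string message) { Errors.Add(message); }
        }

        public class ItemRules
        {
            // BLL: collect every broken rule rather than throwing on the first one.
            public ValidationResult ValidateNewName(string newName)
            {
                var result = new ValidationResult();
                if (string.IsNullOrEmpty(newName))
                    result.AddError("Name is required.");
                else if (!newName.StartsWith("F"))
                    result.AddError("New name must begin with \"F\".");
                return result;
            }
        }

        // UI (e.g. a button handler) -- no try/catch involved:
        //   var result = rules.ValidateNewName(txtName.Text);
        //   if (!result.IsValid) ShowErrors(result.Errors);
        //   else controller.RenameItem(txtName.Text);

    The advantage over returning a single enum value is that several rule violations can be reported to the user in one round trip.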

    Read the article

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong we can look for? Thanks!

    We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that.

    Is there anything we can do to improve this? We can CompiledQuery.Compile this - that doesn't do any work until first use and so helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks). The CompiledQuery object only contains a GUID reference into the cache so it's the cache we really care about. (Writing this out it occurs to me I can kick off something in the background from app_startup to execute all queries to get them compiled - is that safe?)

    However even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on: I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search data results use fewer tables and so it's only 3-4 seconds compile, not 12-15, but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas?

    Profiling points to ELinqQueryState.GetExecutionPlan as the place to start and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them.

    The project was upgraded from .NET 3.5 so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help. I have tried the EFProf utility advertised here but it doesn't look like it would help with this. My large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems.

    Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads rather than all 32 tables at once likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables so I'm optimistic there's some scope for improvement!

    Thanks for any suggestions! We're running against SQL Server 2008 in case that matters, and XP / 7 / Server 2008 R2 using RTM VS2010.
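
    On the "kick something off from app_startup" idea: CompiledQuery.Compile only hands back a delegate, so a warm-up pass has to actually invoke each compiled delegate once, and because the plan cache is per-AppDomain it has to run again after every app pool recycle. A minimal sketch of what that could look like (MyEntities, Customer and the Include path are placeholders, not the poster's model):

        using System;
        using System.Data.Objects;        // EF4 ObjectContext / CompiledQuery
        using System.Linq;
        using System.Threading;

        public static class QueryWarmup
        {
            // Placeholder context/entity names; substitute the real EDMX types.
            static readonly Func<MyEntities, int, Customer> CustomerById =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    ctx.Customers.Include("Orders").FirstOrDefault(c => c.Id == id));

            // Call from Application_Start: execute each compiled query once on a
            // background thread so the first real user doesn't pay the 12-15 s.
            public static void Start()
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    using (var ctx = new MyEntities())
                    {
                        var ignored = CustomerById(ctx, -1);   // result discarded; we only want the plan cached
                    }
                });
            }
        }

    Running this on a worker thread is generally safe as long as each thread uses its own ObjectContext instance; the compiled-plan cache itself is shared and thread-safe, so the foreground requests simply benefit once the background pass finishes.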

    Read the article

  • A methodology that allows for a single Java code base covering many different versions?

    - by Thorbjørn Ravn Andersen
    I work in a small shop where we have a LOT of legacy Cobol code and where a methology has been adopted to allow us to minimize forking and branching as much as possible. For a given release we have three levels: CORE - bottom layer, this code is common to all releases GROUP - optional code common to several customers. CUSTOMER - optional code specific for a single customer. When a program is needed, it is first searched for in CUSTOMER, then in GROUP and finally in CORE. A given application for us invokes many programs which all are looked for in this sequence (think exe files and PATH under Windows). We also have Java programs interacting with this legacy code, and as the core-group-customer lookup mehchanism does not lend it self easily to Java it has tended to grow in a CVS branch for each customer, requiring much too much maintainance. The Java part and the backend part tend to be developed in parallel. I have been assigned to figure out a way to make the two worlds meet. Essentially we want a Java enviornment which allows us to have a single code base with sources for each release, where we easily can select a group and a customer and work with the application as it goes for that customer, and then easily switch to another codeset and THAT customer. I was thinking of perhaps a scenario with an Eclipse project for each core, customer, and group and then use Project Sets to select those we need for a given scenario. The problem I cannot get my head about, is how we would create robust code in the CORE projects which will work regardless of which group and customer is selected. A Factory class which knows which sub class of a passed Class object to invoke instead of each and every new? Others must have had similar code base management problems. Anybody with experiences to share? EDIT: The conclusion to this problem above has been that CVS needs to be replaced with a source code management system better suited for dealing with many branches concurrently and the migration of source from one component to the other while keeping history. Inspired by the recent migration by slf4j and logback we are currently looking at git as it handles branches very well. We've considered subversion and mercurial too but git appears to be better for single location, multibranched projects. I've asked about Perforce in another question, but my personal inclination is towards open source solutions for something as crucial as this. EDIT: After some more pondering, we've found that our actual pain point is that we use branches in CVS, and that branches in CVS are the easiest to work with if you branch ALL files! The revised conclusion is that we can do this with CVS alone, by switching to a forest of java projects, each corresponding to one of the levels above, and use the Eclipse build paths to tie them together so each CUSTOMER version pulls in the appropriate GROUP and CORE project. We still want to switch to a better versioning system but this is so important a decision so we want to delay it as much as possible. EDIT: I now have a proof-of-concept implementation of the CORE-GROUP-CUSTOMER concept using Google Guice 2.0 - the @ImplementedBy tag is just what we need. I wonder what everybody else does? Using if's all over the place? EDIT: Now I also need this functionality for web applications. Guice was until the JSR-330 is in place. Anybody with versioning experience? 
EDIT: JSR-330/299 is now in place with the JEE6 reference implementation Weld based on JBoss Seam and I have reimplemented the proof-of-concept with Weld and can see that if we use @Alternative along with ... in beans.xml we can get the behaviour we desire. I.e. provide a new implementation for a given functionality in CORE without changing a bit in the CORE jars. Initial reading up on the Servlet 3.0 specification indicates that it may support the same functionality for web application resources (not code). We will now do initial testing on the real application.
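As a rough sketch of the Guice 2.0 @ImplementedBy approach mentioned in the last two edits (the interface and class names below are hypothetical, not taken from the question): the annotation supplies the CORE default, and an explicit binding in a GROUP or CUSTOMER module overrides it without touching the CORE jar.

    // CORE jar: the default binding lives next to the interface.
    import com.google.inject.ImplementedBy;

    @ImplementedBy(CoreInvoiceFormatter.class)
    public interface InvoiceFormatter {
        String format(String raw);
    }

    public class CoreInvoiceFormatter implements InvoiceFormatter {
        public String format(String raw) { return raw; }
    }

    // CUSTOMER jar: an explicit binding in the customer's module wins over @ImplementedBy.
    import com.google.inject.AbstractModule;

    public class AcmeInvoiceFormatter implements InvoiceFormatter {
        public String format(String raw) { return "ACME: " + raw; }
    }

    public class AcmeCustomerModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(InvoiceFormatter.class).to(AcmeInvoiceFormatter.class);
        }
    }

Selecting a customer then amounts to choosing which modules are passed to Guice.createInjector(...) at start-up; CORE code only ever asks the injector for InvoiceFormatter.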

    Read the article

  • Sending emails in web applications

    - by Denise
    Hi everyone, I'm looking for some opinions here. I'm building a web application which has the fairly standard functionality of: register for an account by filling out a form and submitting it, receive an email with a confirmation code link, and click the link to confirm the new account and log in.
When you send emails from your web application, it's often (usually) the case that there will be some change to the persistence layer. For example: a new user registers for an account on your site - the new user is created in the database and an email is sent to them with a confirmation link; a user assigns a bug or issue to someone else - the issue is updated and email notifications are sent. How you send these emails can be critical to the success of your application, and how you send them depends on how important it is that the intended recipient receives the email. We'll look at the following four strategies in relation to the case where the mail server is down, using example 1.
TRANSACTIONAL & SYNCHRONOUS: The sending of the email fails and the user is shown an error message saying that their account could not be created. The application will appear to be slow and unresponsive as it waits for the connection timeout. The account is not created in the database because the transaction is rolled back.
TRANSACTIONAL & ASYNCHRONOUS: The transactional definition here refers to sending the email to a JMS queue or saving it in a database table for another background process to pick up and send. The user account is created in the database and the email is sent to a JMS queue for processing later. The transaction is successful and committed. The user is shown a message saying that their account was created and to check their email for a confirmation link. It's possible in this case that the email is never sent due to some other error; however, the user is told that the email has been sent to them. There may be some delay in getting the email sent to the user if application support has to be called in to diagnose the email problem.
NON-TRANSACTIONAL & SYNCHRONOUS: The user is created in the database, but the application gets a timeout error when it tries to send the email with the confirmation link. The user is shown an error message saying that there was an error. The application is slow and unresponsive as it waits for the connection timeout. When the mail server comes back to life and the user tries to register again, they are told their account already exists but has not been confirmed, and are given the option of having the email re-sent to them.
NON-TRANSACTIONAL & ASYNCHRONOUS: The only difference between this and transactional & asynchronous is that if there is an error sending the email to the JMS queue or saving it in the database, the user account is still created but the email is never sent until the user attempts to register again.
What I'd like to know is: what have other people done here? Can you recommend any solutions other than the 4 I've mentioned above? What's a reasonable way of approaching this problem? I don't want to over-engineer a system that's dealing with the (hopefully) rare situation where my mail server goes down! The simplest thing to do is to code it synchronously, but are there any other pitfalls to this approach? I guess I'm wondering if there's a best practice; I couldn't find much out there by googling.
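A minimal sketch of the transactional & asynchronous option above, using a plain JDBC "outbox" table that is written in the same transaction as the new user row and drained later by a background job (the table and class names here are hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.sql.DataSource;

    public class RegistrationService {
        private final DataSource dataSource; // assumed to be configured elsewhere

        public RegistrationService(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Creates the account and queues the confirmation email in ONE transaction.
        public void register(String email, String confirmationCode) throws Exception {
            try (Connection con = dataSource.getConnection()) {
                con.setAutoCommit(false);
                try (PreparedStatement user = con.prepareStatement(
                         "INSERT INTO users (email, confirmed) VALUES (?, FALSE)");
                     PreparedStatement outbox = con.prepareStatement(
                         "INSERT INTO email_outbox (recipient, body, sent) VALUES (?, ?, FALSE)")) {
                    user.setString(1, email);
                    user.executeUpdate();
                    outbox.setString(1, email);
                    outbox.setString(2, "Confirm your account: https://example.com/confirm?code=" + confirmationCode);
                    outbox.executeUpdate();
                    con.commit();            // both rows are written, or neither is
                } catch (Exception e) {
                    con.rollback();
                    throw e;
                }
            }
        }
    }

A scheduled job then selects rows with sent = FALSE, sends them over SMTP and marks them sent, so a dead mail server only delays delivery rather than losing the registration.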

    Read the article

  • vSphere ESX 5.5 hosts cannot connect to NFS Server

    - by Gerald
    Summary: My problem is I cannot use the QNAP NFS Server as an NFS datastore from my ESX hosts despite the hosts being able to ping it. I'm utilising a vDS with LACP uplinks for all my network traffic (including NFS) and a subnet for each vmkernel adapter. Setup: I'm evaluating vSphere and I've got two vSphere ESX 5.5 hosts (node1 and node2) and each one has 4x NICs. I've teamed them all up using LACP/802.3ad with my switch and then created a distributed switch between the two hosts with each host's LAG as the uplink. All my networking is going through the distributed switch, ideally, I want to take advantage of DRS and the redundancy. I have a domain controller VM ("Central") and vCenter VM ("vCenter") running on node1 (using node1's local datastore) with both hosts attached to the vCenter instance. Both hosts are in a vCenter datacenter and a cluster with HA and DRS currently disabled. I have a QNAP TS-669 Pro (Version 4.0.3) (TS-x69 series is on VMware Storage HCL) which I want to use as the NFS server for my NFS datastore, it has 2x NICs teamed together using 802.3ad with my switch. vmkernel.log: The error from the host's vmkernel.log is not very useful: NFS: 157: Command: (mount) Server: (10.1.2.100) IP: (10.1.2.100) Path: (/VM) Label (datastoreNAS) Options: (None) cpu9:67402)StorageApdHandler: 698: APD Handle 509bc29f-13556457 Created with lock[StorageApd0x411121] cpu10:67402)StorageApdHandler: 745: Freeing APD Handle [509bc29f-13556457] cpu10:67402)StorageApdHandler: 808: APD Handle freed! cpu10:67402)NFS: 168: NFS mount 10.1.2.100:/VM failed: Unable to connect to NFS server. Network Setup: Here is my distributed switch setup (JPG). Here are my networks. 10.1.1.0/24 VM Management (VLAN 11) 10.1.2.0/24 Storage Network (NFS, VLAN 12) 10.1.3.0/24 VM vMotion (VLAN 13) 10.1.4.0/24 VM Fault Tolerance (VLAN 14) 10.2.0.0/24 VM's Network (VLAN 20) vSphere addresses 10.1.1.1 node1 Management 10.1.1.2 node2 Management 10.1.2.1 node1 vmkernel (For NFS) 10.1.2.2 node2 vmkernel (For NFS) etc. Other addresses 10.1.2.100 QNAP TS-669 (NFS Server) 10.2.0.1 Domain Controller (VM on node1) 10.2.0.2 vCenter (VM on node1) I'm using a Cisco SRW2024P Layer-2 switch (Jumboframes enabled) with the following setup: LACP LAG1 for node1 (Ports 1 through 4) setup as VLAN trunk for VLANs 11-14,20 LACP LAG2 for my router (Ports 5 through 8) setup as VLAN trunk for VLANs 11-14,20 LACP LAG3 for node2 (Ports 9 through 12) setup as VLAN trunk for VLANs 11-14,20 LACP LAG4 for the QNAP (Ports 23 and 24) setup to accept untagged traffic into VLAN 12 Each subnet is routable to another, although, connections to the NFS server from vmk1 shouldn't need it. All other traffic (vSphere Web Client, RDP etc.) goes through this setup fine. I tested the QNAP NFS server beforehand using ESX host VMs atop of a VMware Workstation setup with a dedicated physical NIC and it had no problems. The ACL on the NFS Server share is permissive and allows all subnet ranges full access to the share. I can ping the QNAP from node1 vmk1, the adapter that should be used to NFS: ~ # vmkping -I vmk1 10.1.2.100 PING 10.1.2.100 (10.1.2.100): 56 data bytes 64 bytes from 10.1.2.100: icmp_seq=0 ttl=64 time=0.371 ms 64 bytes from 10.1.2.100: icmp_seq=1 ttl=64 time=0.161 ms 64 bytes from 10.1.2.100: icmp_seq=2 ttl=64 time=0.241 ms Netcat does not throw an error: ~ # nc -z 10.1.2.100 2049 Connection to 10.1.2.100 2049 port [tcp/nfs] succeeded! 
The routing table of node1:
~ # esxcfg-route -l
VMkernel Routes:
Network Netmask Gateway Interface
10.1.1.0 255.255.255.0 Local Subnet vmk0
10.1.2.0 255.255.255.0 Local Subnet vmk1
10.1.3.0 255.255.255.0 Local Subnet vmk2
10.1.4.0 255.255.255.0 Local Subnet vmk3
default 0.0.0.0 10.1.1.254 vmk0
VM Kernel NIC info:
~ # esxcfg-vmknic -l
Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type
vmk0 133 IPv4 10.1.1.1 255.255.255.0 10.1.1.255 00:50:56:66:8e:5f 1500 65535 true STATIC
vmk0 133 IPv6 fe80::250:56ff:fe66:8e5f 64 00:50:56:66:8e:5f 1500 65535 true STATIC, PREFERRED
vmk1 164 IPv4 10.1.2.1 255.255.255.0 10.1.2.255 00:50:56:68:f5:1f 1500 65535 true STATIC
vmk1 164 IPv6 fe80::250:56ff:fe68:f51f 64 00:50:56:68:f5:1f 1500 65535 true STATIC, PREFERRED
vmk2 196 IPv4 10.1.3.1 255.255.255.0 10.1.3.255 00:50:56:66:18:95 1500 65535 true STATIC
vmk2 196 IPv6 fe80::250:56ff:fe66:1895 64 00:50:56:66:18:95 1500 65535 true STATIC, PREFERRED
vmk3 228 IPv4 10.1.4.1 255.255.255.0 10.1.4.255 00:50:56:72:e6:ca 1500 65535 true STATIC
vmk3 228 IPv6 fe80::250:56ff:fe72:e6ca 64 00:50:56:72:e6:ca 1500 65535 true STATIC, PREFERRED
Things I've tried/checked:
I'm not using DNS names to connect to the NFS server.
Checked MTU; set to 9000 for vmk1, the dvSwitch, the Cisco switch and the QNAP.
Moved the QNAP onto VLAN 11 (VM Management, vmk0) and gave it an appropriate address; still had the same issue. Changed back afterwards, of course.
Tried initiating the connection to the NAS datastore from the vSphere Client (connected to vCenter or directly to the host), the vSphere Web Client and the host's ESX shell. All resulted in the same problem.
Tried a path name of "VM", "/VM" and "/share/VM", despite not even having a connection to the server.
I plugged a Linux system (10.1.2.123) into a switch port configured for VLAN 12 and tried mounting the NFS share 10.1.2.100:/VM; it worked successfully and I had read-write access to it.
Tried disabling the firewall on the ESX host: esxcli network firewall set --enabled false
I'm out of ideas on what to try next. The things I'm doing differently from my VMware Workstation setup are the use of LACP with a physical switch and a virtual distributed switch between the two hosts. I'm guessing the vDS is probably the source of my troubles, but I don't know how to fix this problem without eliminating it.

    Read the article

  • Wireless internet connection connects but internet does not work (no packets received). Wired does.

    - by Rodney
    When I connect my PC via Ethernet cable to my ADSL router it works fine. When I connect via wireless it connects and the internet will work for a random amount of time and then stop working. It stays connected with a strong signal but no packets are received. My laptop/iPhone are right next to it and wireless works fine. If I open the Wireless USB status, it says it is connected to my SSID with full strength (54 Mbps - I am 3 metres away from my router) and the activity shows as Packets 594 SENT and 105 RECEIVED (this goes up VERY slowly).
I have tried the following:
Turned off antivirus and firewall completely.
Tested the wifi signal - I am writing this on my laptop, which is next to my PC and also has full wifi strength.
Tried a different wireless adapter - I dug out an old PCI wireless card - it does the exact same thing.
Compared all wireless settings to my laptop.
I can ping google.com and it replies (sometimes with packet loss).
When I reboot the PC it will connect for a minute or two (random time) and then just stops again.
I tried Firefox, IE etc. - no joy.
I have updated to the latest versions and drivers (Netgear WG111v2).
Checked Event Log - nothing unusual.
Pinged the router (and can even connect as admin for the few minutes when the internet does work).
Changed the MTU down to 1200 using DrTCP.
Checked Device Manager for conflicts - none.
I ping the router from the PC (192.168.0.10 - 192.168.0.1) and it replies with 4 packets. BUT, on my router admin page (which I access via http on my laptop wirelessly), if I ping 192.168.0.10 all packets time out (pinging my laptop 192.168.0.12 works fine). My router admin page shows the leased IP address for 192.168.0.10 (i.e. it is definitely talking to the router initially). Now I am out of ideas - please help. I think it is an OS/software issue as I have tried 2 different wireless adapters (PCI and USB) with the same result, but all other wireless devices work fine around mine. It's not the firewall. It is getting assigned an IP address correctly (my PC gets 192.168.0.10, my laptop is .12); it is assigned by DHCP. As soon as I plug in the Ethernet cable it all works fine. Repairing the adapter sometimes helps but it will always stop working after a random time. The wireless adapter always shows as connected with Excellent signal but the internet does not work. I am running Windows XP SP3 and have tried a Netgear WG111v2 USB adapter. Thanks in advance!
UPDATE: The internet seems to be working; it is just sending packets either too small or too slowly to work (some small pages load bits very slowly but then hang).
XP seems to have a networking diagnostic app - here is the output: Last diagnostic run time: 08/30/10 08:16:38 IP Configuration Diagnostic Invalid IP address info Valid IP address detected: 192.168.0.10 IP Layer Diagnostic Corrupted IP routing table info The default route is valid info The loopback route is valid info The local host route is valid info The local subnet route is valid Invalid ARP cache entries action The ARP cache has been flushed Gateway Diagnostic Gateway info The following proxy configuration is being used by IE: Automatically Detect Settings:Disabled Automatic Configuration Script: Proxy Server: Proxy Bypass list: info This computer has the following default gateway entry(ies): 192.168.0.1 info This computer has the following IP address(es): 192.168.0.10 info The default gateway is in the same subnet as this computer info The default gateway entry is a valid unicast address info The default gateway address was resolved via ARP in 1 try(ies) info The default gateway was reached via ICMP Ping in 1 try(ies) info TCP port 80 on host 65.55.12.249 was successfully reached info The Internet host www.microsoft.com was successfully reached info The default gateway is OK DNS Client Diagnostic DNS - Not a home user scenario info Using Web Proxy: no info Resolving name ok for (www.microsoft.com): yes No DNS servers DNS failure HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Passive): Successfully connected to ftp.microsoft.com. info HTTP: Successfully connected to www.microsoft.com. warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out error Could not make an HTTPS connection. info Redirecting user to support call WinSock Diagnostic WinSock status info All base service provider entries are present in the Winsock catalog. info The Winsock Service provider chains are valid. info Provider entry MSAFD Tcpip [TCP/IP] passed the loopback communication test. info Provider entry MSAFD Tcpip [UDP/IP] passed the loopback communication test. info Provider entry RSVP UDP Service Provider passed the loopback communication test. info Provider entry RSVP TCP Service Provider passed the loopback communication test. info Connectivity is valid for all Winsock service providers. Wireless Diagnostic Wireless - Service disabled Wireless - User SSID action User input required: Specify network name or SSID Wireless - First time setup info The Wireless Network name (SSID) to which the user would like to connect = RodSof Wifi. 
Wireless - Radio off info Valid IP address detected: 192.168.0.10 Wireless - Out of range Wireless - Hardware issue Wireless - Novice user Wireless - Ad-hoc network Wireless - Less preferred Wireless - 802.1x enabled Wireless - Configuration mismatch Wireless - Low SNR Network Adapter Diagnostic Network location detection info Using home Internet connection Network adapter identification info Network connection: Name=Local Area Connection 2, Device=Realtek RTL8168C(P)/8111C(P) PCI-E Gigabit Ethernet NIC, MediaType=LAN, SubMediaType=LAN info Network connection: Name=Wireless USB, Device=NETGEAR WG111v2 54Mbps Wireless USB 2.0 Adapter, MediaType=LAN, SubMediaType=WIRELESS info Both Ethernet and Wireless connections available, prompting user for selection action User input required: Select network connection info Wireless connection selected Network adapter status info Network connection status: Connected HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Active): Successfully connected to ftp.microsoft.com. warn HTTP: Error 12007 connecting to www.microsoft.com: The server name or address could not be resolved warn HTTP: Error 12002 connecting to www.hotmail.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out error Could not make an HTTP connection. error Could not make an HTTPS connection.

    Read the article

  • My D-Link's Ethernet bridge downlink just got 10-30x slower?

    - by Jay Levitt
    TL;DR: I unplugged my network to move my desk, and now downloading via my DIR-655's Ethernet LAN bridge is 10-30x slower than the Ethernet switch it's plugged into.
Background
My network is: SMC cable modem <-> Cisco firewall <-> Netgear switch <-> D-Link WiFi†
(models: SMC8014, ASA-5505, GS608v2 gigE, DIR-655 rev A3 gigE)
†The DIR-655 is used as an access point, not a router (although what D-Link calls an access point, I'd call a bridge). The "WAN" port is unused; the Netgear connects to the built-in 4-port Ethernet LAN switch, inside the built-in router/firewall.
Endpoints: MacBook Pro 17" mid-2010; iPhone 4S; Fedora 12 Linux server running a reasonably fast dual-Athlon X2, VelociRaptors, etc. All cables are <10 feet, mostly CAT-5e, some CAT-6, all premade. All WiFi endpoints are within three feet of the D-Link. Yesterday I unplugged and rearranged stuff, and now connecting via the D-Link - even through the wired switch, right next to the incoming network cable - is 30x slower than connecting directly to the Netgear switch, on both my MacBook and iPhone.
How I'm measuring "slower"
I'm mostly using http://speedtest.net, which of course only really measures broadband speeds. I've also installed http://www.speedtest.net/mini.php on my local server, but can't test the iPhone with that.
Results
Speedtest.net, closest server over Comcast business-class:
CONFIG | PING (ms) | DOWN (Mbps) | UP (Mbps)
Mac <-> Ethernet <-> Netgear | 9 | 31.6 | 6.8
Mac <-> Ethernet <-> D-Link | 8 | 4.1 | 6.0
Mac <-> WiFi <-> D-Link | 9 | 1.4 | 2.9
iPhone <-> WiFi <-> D-Link | 67 | 0.4 | 1.6
Speedtest Mini on Linux PC:
CONFIG | DOWN (Mbps) | UP (Mbps)
Mac <-> Ethernet <-> Netgear | 97.2 | 76.9
Mac <-> Ethernet <-> D-Link | 8.2 | 24.2
Mac <-> WiFi <-> D-Link | 1.0 | 8.6
Slow typing in SSH:
Mac <-> Ethernet <-> Netgear <-> Linux PC: smooth
Mac <-> Ethernet <-> D-Link <-> Linux PC: choppy
Note that D-Link upload speeds are normal on broadband, slower locally (but I'd believe that's a D-Link limitation), and always faster than the downloads! Since ssh is choppy just with slow typing, I don't believe it's a throttling-type problem either; that's not a lot of bandwidth.
What I've tried
Swapping all "good" and "bad" cables
Re-plugging the "bad" cable from D-Link to Netgear and watching it become the "good" cable
Pulling cables away from power lines
Verifying that the Mac auto-detects the D-Link as gigE
Trying to verify the link speed of the D-Link <-> Netgear connection, but the firmware doesn't report that
Verifying that the D-Link sees no TX/RX errors or collisions
Using different Ethernet ports on both Netgear and D-Link
Resetting the D-Link to factory settings
Upgrading the D-Link firmware from 1.21 to 1.35NA, 2010/11/12, the latest
Rebooting everything at least once
On the Mac, disabling Wi-Fi during the Ethernet tests, and unplugging Ethernet during the Wi-Fi tests
Using iStumbler, verifying that the D-Link isn't picking overloaded Wi-Fi channels (usually just 1-5 neighbors on my and adjacent channels, average for my apartment building)
Verifying that the only client connected to the Wi-Fi was the iPhone
Verifying that nothing was being chatty on my network according to the WISH log
Enabling and disabling all sorts of D-Link settings, including forcing WAN auto-detect to gigE
So. I don't mind buying a new access point - I wouldn't mind having a dual-link network - but as a guy who's been networking since gated v4 was a drastic rewrite, and who often used physical sniffers in the days before Wireshark, I'm baffled. I hate being baffled.
What could I possibly have changed that would result in this? How can I measure it? All I can think of is a static zap—thick carpet, socks, HVAC—but I didn't feel one, and does that really happen anymore? Can I test if it's Ethernet vs. TCP layer slowness? I'm not familiar with modern network utilities; it's hard to Google without hitting "Q: Why is my network slow? A: Is your microwave on?" If I don't get an answer here, will someone big and powerful help me migrate it to serverfault without getting screamed back here? In the words of Inigo Montoya, "I must know." Don't get all Dread Pirate Roberts on me.
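On the question of separating Ethernet-level from TCP-layer slowness: one low-tech check is to push a fixed volume of bytes over a bare TCP socket between two endpoints and compare throughput over the Netgear path versus the D-Link path. A rough, hypothetical sketch (class name, port and transfer size are arbitrary):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal TCP throughput probe: run "java ThroughputTest server 5001" on the Linux box,
    // then "java ThroughputTest client <server-ip> 5001" on the Mac over each path.
    public class ThroughputTest {
        static final int MEGABYTES = 100;

        public static void main(String[] args) throws Exception {
            if (args[0].equals("server")) {
                try (ServerSocket ss = new ServerSocket(Integer.parseInt(args[1]));
                     Socket s = ss.accept();
                     InputStream in = s.getInputStream()) {
                    byte[] buf = new byte[64 * 1024];
                    long total = 0;
                    long start = System.nanoTime();
                    int n;
                    while ((n = in.read(buf)) != -1) total += n;   // count every received byte
                    double secs = (System.nanoTime() - start) / 1e9;
                    System.out.printf("Received %.1f MB in %.2f s = %.1f Mbit/s%n",
                            total / 1e6, secs, total * 8 / 1e6 / secs);
                }
            } else {
                try (Socket s = new Socket(args[1], Integer.parseInt(args[2]));
                     OutputStream out = s.getOutputStream()) {
                    byte[] buf = new byte[64 * 1024];
                    for (long sent = 0; sent < MEGABYTES * 1_000_000L; sent += buf.length) {
                        out.write(buf);   // push a fixed volume of zero bytes
                    }
                }
            }
        }
    }

If the bare-socket numbers show the same 10-30x gap, the slowdown is below the application layer; if they don't, look higher up the stack.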

    Read the article

  • How to stop OpenGL from applying blending to certain content? (see pics)

    - by RexOnRoids
    Supporting Info: I use cocos2d to draw a sprite (graph background) on the screen (z:-1). I then use cocos2d to draw lines/points (z:0) on top of the background -- and make some calls to OpenGL blending functions before the drawing to SMOOTH out the lines. Problem: The problem is that: aside from producing smooth lines/points, calling these OpenGL blending functions seems to "degrade" the underlying sprite (graph background). As you can see from the images below, the "degraded" background seems to be made darker and less sharp in Case 2. So there is a tradeoff: I can either have (Case 1) a nice background and choppy lines/points, or I can have (Case 2) nice smooth lines/points and a degraded background. But obviously I need both. THE QUESTION: How do I set OpenGL so as to only apply the blending to the layer with the Lines/Points in it and thus leave the background alone? The Code: I have included code of the draw() method of the CCLayer for both cases explained above. As you can see, the code producing the difference between Case 1 and Case 2 seems to be 1 or 2 lines involving OpenGL Blending. Case 1 -- MainScene.h (CCLayer): -(void)draw{ int lastPointX = 0; int lastPointY = 0; GLfloat colorMAX = 255.0f; GLfloat valR; GLfloat valG; GLfloat valB; if([self.myGraphManager ready]){ valR = (255.0f/colorMAX)*1.0f; valG = (255.0f/colorMAX)*1.0f; valB = (255.0f/colorMAX)*1.0f; NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator]; GraphPoint* object; while ((object = [enumerator nextObject])) { if(object.filled){ /*Commenting out the following two lines induces a problem of making it impossible to have smooth lines/points, but has merit in that it does not degrade the background sprite.*/ //glEnable (GL_BLEND); //glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glHint (GL_LINE_SMOOTH_HINT, GL_DONT_CARE); glEnable (GL_LINE_SMOOTH); glLineWidth(1.5f); glColor4f(valR, valG, valB, 1.0); ccDrawLine(ccp(lastPointX, lastPointY), ccp(object.position.x, object.position.y)); lastPointX = object.position.x; lastPointY = object.position.y; glPointSize(3.0f); glEnable(GL_POINT_SMOOTH); glHint(GL_POINT_SMOOTH_HINT, GL_NICEST); ccDrawPoint(ccp(lastPointX, lastPointY)); } } } } Case 2 -- MainScene.h (CCLayer): -(void)draw{ int lastPointX = 0; int lastPointY = 0; GLfloat colorMAX = 255.0f; GLfloat valR; GLfloat valG; GLfloat valB; if([self.myGraphManager ready]){ valR = (255.0f/colorMAX)*1.0f; valG = (255.0f/colorMAX)*1.0f; valB = (255.0f/colorMAX)*1.0f; NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator]; GraphPoint* object; while ((object = [enumerator nextObject])) { if(object.filled){ /*Enabling the following two lines gives nice smooth lines/points, but has a problem in that it degrades the background sprite.*/ glEnable (GL_BLEND); glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glHint (GL_LINE_SMOOTH_HINT, GL_DONT_CARE); glEnable (GL_LINE_SMOOTH); glLineWidth(1.5f); glColor4f(valR, valG, valB, 1.0); ccDrawLine(ccp(lastPointX, lastPointY), ccp(object.position.x, object.position.y)); lastPointX = object.position.x; lastPointY = object.position.y; glPointSize(3.0f); glEnable(GL_POINT_SMOOTH); glHint(GL_POINT_SMOOTH_HINT, GL_NICEST); ccDrawPoint(ccp(lastPointX, lastPointY)); } } } }

    Read the article

  • Multi-threaded .NET application blocks during file I/O when protected by Themida

    - by Erik Jensen
    As the title says I have a .NET application that is the GUI which uses multiple threads to perform separate file I/O and notice that the threads occasionally block when the application is protected by Themida. One thread is devoted to reading from serial COM port and another thread is devoted to copying files. What I experience is occasionally when the file copy thread encounters a network delay, it will block the other thread that is reading from the serial port. In addition to slow network (which can be transient), I can cause the problem to happen more frequently by making a PathFileExists call to a bad path e.g. PathFileExists("\\\\BadPath\\file.txt"); The COM port reading function will block during the call to ReadFile. This only happens when the application is protected by Themida. I have tried under WinXP, Win7, and Server 2012. In a streamlined test project, if I replace the .NET application with a MFC unmanaged application and still utilize the same threads I see no issue even when protected with Themida. I have contacted Oreans support and here is their response: The way that a .NET application is protected is very different from a native application. To protect a .NET application, we need to hook most of the file access APIs in order to "cheat" the .NET Framework that the application is protected. I guess that those special hooks (on CreateFile, ReadFile...) are delaying a bit the execution in your application and the problem appears. We did a test making those hooks as light as possible (with minimum code on them) but the problem still appeared in your application. The rest of software protectors that we tried (like Enigma, Molebox...) also use a similar hooking approach as it's the only way to make the .NET packed file to work. If those hooks are not present, the .NET Framework will abort execution as it will see that the original file was tampered (due to all Microsoft checks on .NET files) Those hooks are not present in a native application, that's why it should be working fine on your native application. Oreans support tried other software protectors such as Enigma Protector, Engima VirtualBox, and Molebox and all exhibit the exact same problem. What I have found as a work around is to separate out the file copy logic (where the file exists call is being made) to be performed in a completely separate process. I have experimented with converting the thread functions from unmanaged C++ to VB.NET equivalents (PathFileExists - System.IO.File.Exists and CreateFile/ReadFile - System.IO.Ports.SerialPort.Open/Read) and still see the same serial port read blocked when the file check or copy call is delayed. I have also tried setting the ReadFile to work asynchronously but that had no effect. I believe I am dealing with some low-level windows layer that no matter the language it exhibits a block on a shared resource -- and only when the application is executing under a single .NET process protected by Themida which evidently installs some hooks to allow .NET execution. At this time converting the entire application away from .NET is not an option. Nor is separating out the file copy logic to a separate task. I am wondering if anyone else has more knowledge of how a file operation can block another thread reading from a system port. I have included here example applications that show the problem: https://db.tt/cNMYfEIg - VB.NET https://db.tt/Y2lnTqw7 - MFC They are Visual Studio 2010 solutions. 
When running the Themida-protected exe, you can see that when the FileThread counter pauses (executing the File.Exists call), the ReadThread counter also pauses. When running the non-protected Visual Studio output exe, the ReadThread counter does not pause, which is how we expect it to function. Thanks!

    Read the article

  • Spring MVC working with Web Designers

    - by jboyd
    When working with web designers in a Spring MVC and JSP environment, are there any tools or helpful patterns to reduce the pain of going back and forth between HTML and JSP? Project requirements dictate that views will change often, and it can be difficult to make changes efficiently because of the amount of Java code that leaks into the view layer. Ideally I'd like to remove almost all Java code from the view, but it does not seem like this works with the Spring/JSP philosophy, where often the way to remove Java code is to replace that code with tag libraries, which still have a similar problem. To provide a little clarity to my question I'm going to include some existing code (inherited by me) to show the kinds of problems I'm likely to face when changing the look of our views:
<%-- Begin looping through results --%>
<%
List memberList = memberSearchResults.getResults();
for(int i = start - 1; i < memberList.size() && i < end; i++) {
    Profile profile = (Profile)memberList.get(i);
    long profileId = profile.getProfileId();
    String nickname = profile.getNickname();
    String description = profile.getDescription();
    Image image = profile.getAvatarImage();
    String avatarImageSrc = null;
    int avatarImageWidthNum = 0;
    int avatarImageHeightNum = 0;
    if(null != image) {
        avatarImageSrc = image.getSrc();
        avatarImageWidthNum = image.getWidth();
        avatarImageHeightNum = image.getHeight();
    }
    String bgColor = i % 2 == 1 ? "background-color:#FFF" : "";
%>
<div style="float:left;clear:both;padding:5px 0 5px 5px;cursor:pointer;<%= bgColor %>" onclick='window.location="profile.sp?profileId=<%= profileId %>"'>
    <div style="float:right;clear:right;padding-left:10px;width:515px;font-size:10px;color:#7e7e7e">
        <h6><%= nickname %></h6>
        <%= description %>
    </div>
    <img style="float:left;clear:left;" alt="Avatar Image" src="<%= null != avatarImageSrc && avatarImageSrc.trim().length() > 0 ? avatarImageSrc : "images/defaultUserImage.png" %>" <%= avatarImageWidthNum < avatarImageHeightNum ? "height='59'" : "width='92'" %> />
</div>
<% } // End loop %>
Now, ignoring some of the code smells there, it's obvious that if someone wants to change the look of that DIV it would be necessary to re-place all the Java/JSP code into the new HTML provided (the designers don't work in JSP files; they have their own HTML versions of the website), which is tedious and prone to errors.
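One pattern that keeps Java out of the JSP is to do the per-row computation in the controller and hand the view a list of flat, display-ready beans, so the JSP reduces to a JSTL <c:forEach> over ${rows} containing only EL expressions. A hypothetical sketch (names invented, not from the question):

    import java.util.ArrayList;
    import java.util.List;
    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.RequestMapping;

    @Controller
    public class MemberSearchController {

        @RequestMapping("/members")
        public String list(Model model) {
            List<ProfileRow> rows = new ArrayList<ProfileRow>();
            // ... load profiles from the service layer and precompute display values here ...
            model.addAttribute("rows", rows);
            return "memberList";   // the JSP iterates with <c:forEach items="${rows}" var="row">
        }

        // Hypothetical flat view bean: plain getters only, nothing for the JSP to compute.
        public static class ProfileRow {
            private final String nickname;
            private final String avatarSrc;
            private final boolean stripe;   // precomputed zebra-striping flag
            public ProfileRow(String nickname, String avatarSrc, boolean stripe) {
                this.nickname = nickname;
                this.avatarSrc = avatarSrc;
                this.stripe = stripe;
            }
            public String getNickname() { return nickname; }
            public String getAvatarSrc() { return avatarSrc; }
            public boolean isStripe() { return stripe; }
        }
    }

The designers' HTML then only needs ${row.nickname}-style placeholders, which survive a re-skin far better than scriptlets.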

    Read the article

  • Deploying to a clustered WebLogic application server on Red Hat Linux

    - by user510210
    I am developing an application with following software stack: XHTML / CSS / ExtJS / DWR / Javascript (Presentation Layer) EJB 3.0 / Spring MVC Hibernate / Hibernate Spatial My application works well in a single server development environment. But deploying to clustered weblogic environment on Red Hat does not work and results in the following exception: ============================================================================================ org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from ServletContext resource [/WEB-INF/applicationContext.xml]; nested exception is java.lang.NoSuchMethodError: Caused by: java.lang.NoSuchMethodError: at org.apache.xerces.impl.dv.xs.XSSimpleTypeDecl.applyFacets(Unknown Source) at org.apache.xerces.impl.dv.xs.XSSimpleTypeDecl.applyFacets1(Unknown Source) at org.apache.xerces.impl.dv.xs.BaseSchemaDVFactory.createBuiltInTypes(Unknown Source) at org.apache.xerces.impl.dv.xs.SchemaDVFactoryImpl.createBuiltInTypes(Unknown Source) at org.apache.xerces.impl.dv.xs.SchemaDVFactoryImpl.(Unknown Source) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.apache.xerces.impl.dv.ObjectFactory.newInstance(Unknown Source) at org.apache.xerces.impl.dv.SchemaDVFactory.getInstance(Unknown Source) at org.apache.xerces.impl.dv.SchemaDVFactory.getInstance(Unknown Source) at org.apache.xerces.impl.xs.SchemaGrammar$BuiltinSchemaGrammar.(Unknown Source) at org.apache.xerces.impl.xs.SchemaGrammar.(Unknown Source) at org.apache.xerces.impl.xs.XMLSchemaValidator.(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.configurePipeline(Unknown Source) at org.apache.xerces.parsers.XIncludeAwareParserConfiguration.configurePipeline(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.DOMParser.parse(Unknown Source) at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:76) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:351) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:303) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:280) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:131) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:147) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93) at 
org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:101) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:390) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:327) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:244) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:187) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:50) at weblogic.servlet.internal.EventsManager$FireContextListenerAction.run(EventsManager.java:481) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:181) at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:1801) at weblogic.servlet.internal.WebAppServletContext.start(WebAppServletContext.java:3042) at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1374) at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:455) at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:205) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37) at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:60) at weblogic.application.internal.flow.ScopedModuleDriver.start(ScopedModuleDriver.java:201) at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:118) at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:205) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37) at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:60) at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:28) at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:630) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37) at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:206) at weblogic.application.internal.EarDeployment.activate(EarDeployment.java:53) at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:161) at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:79) at weblogic.deploy.internal.targetserver.BasicDeployment.activate(BasicDeployment.java:184) at weblogic.deploy.internal.targetserver.BasicDeployment.activateFromServerLifecycle(BasicDeployment.java:361) at weblogic.management.deploy.internal.DeploymentAdapter$1.doActivate(DeploymentAdapter.java:52) at weblogic.management.deploy.internal.DeploymentAdapter.activate(DeploymentAdapter.java:196) at weblogic.management.deploy.internal.AppTransition$2.transitionApp(AppTransition.java:31) at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:233) at weblogic.management.deploy.internal.ConfiguredDeployments.activate(ConfiguredDeployments.java:170) at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:124) at 
weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:174) at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:90) at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) ============================================================================================ My initial thought is that there is a clash in the Xerces library being used. But I could use any feedback.

    Read the article

  • Limiting TCP sends with a "to-be-sent" queue and other design issues.

    - by Poni
    Hello all! This question is the result of two other questions I've asked in the last few days. I'm creating a new question because I think it's related to the "next step" in my understanding of how to control the flow of my send/receive, something I didn't get a full answer to yet. The other related questions are: http://stackoverflow.com/questions/3028376/an-iocp-documentation-interpretation-question-buffer-ownership-ambiguity http://stackoverflow.com/questions/3028998/non-blocking-tcp-buffer-issues In summary, I'm using Windows I/O Completion Ports. I have several threads that process notifications from the completion port. I believe the question is platform-independent and would have the same answer as if I were doing the same thing on a *nix, *BSD or Solaris system. So, I need to have my own flow control system. Fine. So I send, send and send, a lot. How do I know when to start queueing the sends, as the receiver side is limited to X amount? Let's take an example (the closest thing to my question): the FTP protocol. I have two servers; one is on a 100Mb link and the other is on a 10Mb link. I order the 100Mb one to send a 1GB file to the other one (the 10Mb-linked one). It finishes with an average transfer rate of 1.25MB/s. How did the sender (the 100Mb-linked one) know when to hold off sending, so the slower one wouldn't be flooded? Another way to ask this: can I get a "hold-your-sendings" notification from the remote side? Is it built into TCP, or does the so-called "reliable network protocol" need me to do it myself? Again, I have a loop with many sends to a remote server, and at some point within that loop I'll have to determine if I should queue that send or if I can pass it on to the transport layer (TCP). How do I do that? What would you do? Of course, when I get a completion notification from IOCP that the send was done I'll issue other pending sends; that's clear. Another design question related to this: since I am to use custom buffers with a send queue, and these buffers are freed to be reused (thus not using the "delete" keyword) when a "send-done" notification has arrived, I'll have to use mutual exclusion on that buffer pool. Using a mutex slows things down, so I've been thinking: why not have each thread keep its own buffer pool? Then accessing it, at least when getting the required buffers for a send operation, will require no mutex, because it belongs to that thread only. The buffer pool is located at the thread local storage (TLS) level. No shared pool implies no lock needed, which implies faster operations, BUT also implies more memory used by the app, because even if one thread has already allocated 1000 buffers, another one that is sending right now and needs 1000 buffers to send something will need to allocate its own. This is a long question and I hope no one got hurt (: Thank you all!
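For the FTP example: TCP itself provides the back-pressure via the receiver's advertised window - once the 10Mb side stops granting window, the 100Mb sender's socket buffer fills and further overlapped sends simply stay pending - so the usual application-level answer is to cap the number of outstanding sends per connection and park the rest in your own queue. A rough sketch of that bounded-outstanding-sends idea, written in Java purely for illustration (the class name and limit are invented):

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Keeps at most MAX_IN_FLIGHT sends outstanding; the rest wait in a local queue.
    // enqueue() is called by the producer, onSendCompleted() by the completion handler.
    public class BoundedSender {
        private static final int MAX_IN_FLIGHT = 4;
        private final Queue<byte[]> pending = new ArrayDeque<>();
        private int inFlight = 0;

        public synchronized void enqueue(byte[] buffer) {
            if (inFlight < MAX_IN_FLIGHT) {
                inFlight++;
                startSend(buffer);          // hand the buffer to the transport
            } else {
                pending.add(buffer);        // hold it until a completion frees a slot
            }
        }

        public synchronized void onSendCompleted() {
            byte[] next = pending.poll();
            if (next != null) {
                startSend(next);            // keep the pipe full; slot count unchanged
            } else {
                inFlight--;
            }
        }

        private void startSend(byte[] buffer) {
            // Placeholder: in the real application this would issue the asynchronous
            // send (e.g. WSASend on an IOCP-bound socket) and arrange for
            // onSendCompleted() to be called when the completion arrives.
        }
    }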

    Read the article

  • Database Table Schema and Aggregate Roots

    - by bretddog
    Hi, the application is single user, 1-tier (1 PC), database SqlCE. The DataService layer will be (I think): a repository returning domain objects and querying the database with LinqToSql (dbml). There are obviously a lot more columns; this is a simplified view. http://img573.imageshack.us/img573/3612/ss20110115171817w.png This is my first attempt at creating a 2-table database. I think the table schema makes sense, but I need some reassurance or criticism, because the table relations look quite scary, to be honest. I'm hoping you could: look at the table schema and respond if there are clear signs of trouble or errors that you spot right away, and, if you have time, look at the Program Summary/Questions and see if the table layout makes sense for those points. Please be brutal, I will try to defend :)
Program summary:
a) A set of categories, each having a set of strategies (1:m).
b) Each day a number of items will be produced, and each strategy MAY reference them. (So there can be 50 items, and a strategy may reference 23 of them.)
c) An item can be referenced by more than one strategy. So I think it's an m:m relation.
d) Status values will be logged at fixed time-fractions through the day, for: each Strategy, each StrategyItem, each Item.
e) An action on an item may be executed by a strategy that references it. This is logged as ItemAction (could have called it StrategyItemAction).
User Requests
b) - e) describe the main activity mode of the program: to work with only today's DayLog, for each category. The 2nd-priority activity is retrieval of history, which typically will be: from all categories, from day x to day y, get all StrategyDailyLog.
Questions
First, does the overall layout look sound? I'm worried to see that there are so many relationships in all directions, connecting everything. Is this normal, or does it look like trouble? StrategyItem is made to represent an m:m relationship. Is it correct as I noted 1:m / 1:1 (marked red)? StrategyItemTimeLog and ItemTimeLog log values that both need to be retrieved together when retrieving a StrategyItem. The reason I separated them is that the first one is strategy-specific, and several strategies can reference the same item. So I thought not to duplicate those values that do not depend on the strategy, but only on the item. Hence I also dragged out the LogTime, as it seems to be the only parameter to unite the logs. But this all looks quite disturbing with those 3 tables. Does it make sense at all? Or do you have a suggestion? The pink circles show my vague attempt at Aggregate Root paths. I've been thinking in terms of "what entity is responsible for delete". Though I'm unsure about the actual root, I think it's Category. Does it make sense in relation to the User Requests described above?
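Just to make the relationships concrete, here is a hypothetical sketch of the schema as plain Java classes (not the actual LinqToSql classes from the question): the Strategy-Item m:m is resolved by the StrategyItem link row, which is also the natural home for the strategy-specific time logs, while item-only values stay on ItemTimeLog.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical object view of the schema, only to make the relationships explicit.
    class Category {
        final List<Strategy> strategies = new ArrayList<>();            // 1:m
    }

    class Strategy {
        Category category;                                              // m:1 back-reference
        final List<StrategyItem> strategyItems = new ArrayList<>();     // 1:m to the link rows
    }

    class Item {
        final List<StrategyItem> referencedBy = new ArrayList<>();      // 1:m to the link rows
        final List<ItemTimeLog> timeLogs = new ArrayList<>();           // values depending on the item only
    }

    // The link entity that resolves the Strategy <-> Item m:m relationship.
    class StrategyItem {
        Strategy strategy;
        Item item;
        final List<StrategyItemTimeLog> timeLogs = new ArrayList<>();   // strategy-specific values
    }

    class ItemTimeLog { long logTime; double value; }
    class StrategyItemTimeLog { long logTime; double value; }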

    Read the article

  • Migrate Spring JPA DAO unit testing to Google App Engine

    - by twingocerise
    I'm trying to put together a simple environment where I can get Spring, Maven, JPA, Google App Engine and DAO unit testing working happily all together. The goal is to be able to run a simple DAO unit test that creates an entity and then loads it again with a simple find to check it's been created properly - all of this from my Maven build. My DAO makes use of the JPA entity manager (query(), persist(), etc.). I've got it working with no problem with HSQLDB and a datasource, etc., but I'm struggling to get it working with App Engine. My questions are:
1) I'm using an entity manager, injecting my persistence unit as follows. Is it OK? Is there any need for a datasource or something special? I thought not, but correct me if I'm wrong.
applicationContext.xml
<bean id='entityManagerFactory' class='org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean'>
    <property name="persistenceUnitName" value="transactions-optional" />
</bean>
Persistence.xml
<persistence-unit name="transactions-optional">
    <provider>org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider</provider>
    <properties>
        <property name="datanucleus.NontransactionalRead" value="true"/>
        <property name="datanucleus.NontransactionalWrite" value="true"/>
        <property name="datanucleus.ConnectionURL" value="appengine"/>
    </properties>
</persistence-unit>
2) What dependencies do I need to add to my pom file to be able to run a unit test making use of the entityManager? What about versions? I found loads of things about appengine-api-labs/stubs/testing, but none of them got it working, i.e. I'm getting a missing JDO dependency while I'm using JPA... I also get loads of conflicts when I try to add some jars (datanucleus and stuff). So far I'm trying appengine-api-1.0-sdk v1.7.0 - ASM-all v3.3 - datanucleus core/api-jpa/enhancer v3.1.0 - datanucleus-appengine v2.0.1.1 and all the gae testing jars v1.7.0.
3) Is there anything I need to add to my surefire plugin (test runner) to make sure it picks up all the dependencies? I'm getting an exhausting ClassNotFound on DatastorePersistenceProvider while it is in my classpath (I checked the jars and the mvn dependency:tree). I had a look at this but it doesn't seem to be working at all: http://www.vertigrated.com/blog/2011/02/working-maven-3-google-app-engine-plugin-with-gwt-support/
4) Do I need to use any sort of local helper to test my DAOs? Ideally I'd want to test my DAO layer "as is" with the entity manager... what's your opinion? Has anyone managed to run a unit test using JPA on Google App Engine?
5) Do I need to set up any sort of gae.home somewhere in my pom file? Would anything make use of it (a plugin or something)?
6) Is the gwt-maven plugin any help if I don't use GWT? I'm writing a simple web service making use of App Engine, not a GWT app...
Any help would be much appreciated as I've been struggling for 2 days now... Cheers, V.
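On question 4: the App Engine SDK does ship a local test harness, and a minimal JUnit sketch using LocalServiceTestHelper with the in-memory datastore stub might look like the following (the DAO wiring is left as comments because it depends on your Spring setup; the appengine-testing and appengine-api-stubs jars, matching the SDK version, are the usual test-scoped dependencies for this):

    import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig;
    import com.google.appengine.tools.development.testing.LocalServiceTestHelper;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class MyDaoTest {

        // Spins up an in-memory datastore stub for each test.
        private final LocalServiceTestHelper helper =
                new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());

        @Before
        public void setUp() {
            helper.setUp();
            // Obtain the DAO here, e.g. from a Spring test context or by creating it
            // with an EntityManagerFactory built from the "transactions-optional" unit.
        }

        @After
        public void tearDown() {
            helper.tearDown();   // wipes the stub datastore between tests
        }

        @Test
        public void createAndFindEntity() {
            // dao.persist(new Thing(...));
            // assert that dao.find(...) returns the entity just persisted
        }
    }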

    Read the article

< Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >