Search Results

Search found 4142 results on 166 pages for 'lost hobbit'.


  • Understanding Windows 8 Recovery options

    - by stuffe
    Background: I am preparing a PC that I am sending to a relative abroad, who has little or no internet access, and next to no sensible options for getting IT support should anything go wrong. As such I am trying to provide a full set of recovery options such that they are able to reinstall the OS with minimum fuss or assistance if required. The PC is a brand new Acer laptop that came with Windows 7 pre-installed (and an associated recovery partition) and a free upgrade to Windows 8. I have installed Windows 8 from scratch performing a format and clean install from media I burned from the official download. The existing Windows 7 recovery partition is still there, and I can still boot from it. I have created recovery DVDs of that in case it is ever lost. Here are my recovery options so far. I can perform a factory reset of Win 7 via the recovery partition I can perform a factory reset of Win 7 via burned recovery DVDs I can re-install Windows 8 cleanly from a DVD All of these are useful, but not what I want, because the first 2 methods use Win 7, and still fill the machine with crapware, and the latter doesn't provide for any post-install customisation and software installation. So, I am looking to see what other options are available to perform a Windows 8 recovery that will be more than a simple install. I am aware that Win8 comes with some useful refresh tools: Refresh your PC - Re-install Win 8 over the top of your existing installation, recovering from any Windows corruption etc. I can run this from my current install, although it says some files are missing that will be provided by me install or recovery media, which seems to be code for stick your install DVD in, and it starts after I do that - unfortunately for this particular laptop you need to specify a particular WIFI driver or the install bombs out part way through with IRQL errors, and this refresh method skips the part where you can load a driver, so it's no use to me. I think I can fix this by creating a custom recovery image using the recimg.exe command but it takes hours to complete so I haven't tried it yet. Reset your PC - Perform a full install and lose all your files. Again it needs my Install media inserting before it will do anything, but then it provides an error (will include later when I recreate it...) Now, these recovery options look useful (in principal, although both are fail for me) but they rely on having a working system to access the tools, which leads me to the last option, of making a Recovery USB drive. I have made a recovery drive, and it should perform loads of useful things, including copying my WIN7 recovery partition to the drive, providing the above refresh and reset options, providing other troubleshooting options and also the ability to restore from a custom image, only none of them seem to work for me. Creating the Recovery Drive - the option to include my recovery partition is greyed out. The partition exists and works fine, why will it not copy it? Refresh - I imagine this would have the same issues as I described before, but this is moot because when I try it says that the "drive where Windows is installed is locked, please unlock the drive and try again" with no info on what that means and how to do it. Restore - Again, probably pointless as I can just use the DVD, but it also errors: "unable to reset your PC. 
A required drive partition is missing". System Restore - should let me roll back a bad driver etc. as per normal in Windows, only it simply says "To use system restore you must specify which windows installation to restore. Restart this computer, select an operating system, and select system restore" ?!?! System Image Recovery - this seems to be offering to restore from a Windows system image, but this is deprecated in Windows 8, although you can still make one if you use the Windows 7 Backup tools; however the resultant file is too large to put on the USB stick as it's FAT formatted, and would be a massive stack of DVDs anyway. So useless. It would be nice if it would work with the custom recovery image you can use with the refresh command, but there seems to be no option to do this. Automatic Repair - some diagnostics, which seem useless as it happily tells me it can't fix my problem, even though I have none. Command Prompt - yay, this works! What on earth do I want to use it for... Had any of the above worked, it might be useful, but as any form of install still requires you to have the DVD, and any form of custom recovery image also requires you to have either a massive stack of DVDs or an NTFS-formatted backup device in addition to the recovery drive, it sort of ruins the point. It doesn't seem rocket science. I want to create a bootable USB drive that I can refresh Windows over an existing install with, perform a clean reinstall to a bare system, or recover a customised image with existing apps installed. If anyone can point me in a direction that allows me to make a single recovery drive do all of these things, I would appreciate it. I have a 32Gb USB3 thumb drive that I bought for this very purpose, but it seems to be fighting to let me do anything useful. At this rate I will be making a DriveImageXML recovery stick and dumping the OS with that, which I know works, but isn't as elegant as using the proper tools.
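    For completeness, a minimal sketch of creating and registering a custom refresh image with recimg.exe, as mentioned above (run from an elevated command prompt; the folder name is just an example):

        rem Create the custom refresh image (this is the step that takes hours)
        mkdir C:\RefreshImage
        recimg /createimage C:\RefreshImage

        rem Point "Refresh your PC" at that image, then confirm which image is active
        recimg /setcurrent C:\RefreshImage
        recimg /showcurrent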

    Read the article

  • Rails & combo box change event: Help make this obtrusive javascript unobtrusive

    - by DJTripleThreat
    Ok so a friend of mine gave some help with a prototype/obtrusive solution to this but its not quite there. Also, I want to make this unobtrusive instead of using the observe_field function that rails gives me. I don't want to use prototype either because I'm more familiar with JQuery. Here's my problem: I have an Event that can have multiple ServiceTypes and a ServiceType can belong to many Events. A many-to-many relationship between these two exists as an OfferedService. When creating an event, I have a drop down with a list of TimeAllotments that are something like 10 minutes, 12 minutes, 15 minutes, 20 minutes, 30 minutes etc. When the user selects one of these choices, I want a div tag to be filled with a list of ServiceTypes that are associated with this TimeAllotment. So for example, the user selects "10 minutes" and then the div repopulates with services that last 10 minutes. Here is what I have so far: ... some erb code etc and then <fieldset> <legend><%= f.label :time_allotment, "Size of the Appointment Slots:" %></legend> <div> <span class="field-group"> <div> <!-- TimeAllotment is a tabless model which is why this is done like so... --> <%= select("event_service", "time_allotment", TimeAllotment.all.collect {|ta| [ta.title, ta.value]}, {:prompt => true}) %> </div> </span> </div> <div style="clear:both;"></div> Services: <div> <span class="field-group"> <!-- this div right here needs to be repopulated when the above select changes. --> <div id="services"> <% for service_type in ServiceType.all %> <div> <%= check_box_tag "event_service[service_type_ids][]", service_type.id, false %> <%=h service_type.title %> </div> <% end %> </div> </span> </div> <div class="clear"></div> </fieldset> ok so right now ALL of the services are there to be chosen from. I want them to change based on what is selected in the combobox event_service_time_allotment. Can someone help me get pointed in the right direction? I have looked at Ryan's rails casts for using JQuery but its not helpful because he deals with ajax calls for the create action. This would be for the new or edit action. I have a new.js.erb but it doesn't get loaded when calling the new action. I'm super lost as far as getting JQuery to work with my application. I think that if someone can just show me how to make an alert pop up when I change the combo box, and how to return a dataset using ajax the right now, I think I can figure out the rest. Thanks, I know this is super complicated so any helpful answers will get an upvote.
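    As a starting point, a minimal unobtrusive sketch: it assumes jQuery is loaded and that a route such as /service_types/for_allotment exists (hypothetical - point it at whatever controller action renders the checkboxes). The select id event_service_time_allotment and the #services div are the ones from the markup above.

        // sketch only: the /service_types/for_allotment URL is an assumed route
        $(function() {
          $('#event_service_time_allotment').change(function() {
            var allotment = $(this).val();
            alert('Selected allotment: ' + allotment);          // proves the change event fires

            $.get('/service_types/for_allotment', { value: allotment }, function(html) {
              $('#services').html(html);                        // repopulate the services div
            });
          });
        });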

    Read the article

  • KVM machine does not start ssh, network is started, used to work

    - by lleto
    have been searching an pulling my hear out for the last 6 hours. I have a virtual machine that has been running fine for the last six months. I was happy ssh'ing into it and it was running a database and some small apps. Tonight ssh stopped working, so I decided to reboot the machine. I now have the following situation: virsh list --all states machine as running I can ping the machine and get a reply When I ssh to the machine I see "ssh: connect to host [myserver] port 22: Connection refused" nmap does not show port 22 as open I have tried to: - reboot the machine once more (no luck) - mount the filesystem and check /etc/ssh/sshd.conf (has not changed since working situation) - install virsh console, however this does not seem to work When I mount the fs directly using losetup the strange thing is that file dates seem to be frozen in /var/log/ around the time of the crash. If I look in /var/run/ I can see an sshd.pid, but the time is 6 hours ago (and numerous reboots). My virsh xml looks like this: <domain type='kvm' id='21'> <name>myserver</name> <uuid>09678c8d-a99b-1d18-a7af-88d027cc8f93</uuid> <memory>1048576</memory> <currentMemory>1048576</currentMemory> <vcpu>1</vcpu> <os> <type arch='x86_64' machine='pc-1.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/dev/disk01/myserver'/> <target dev='hda' bus='ide'/> <alias name='ide0-0-0'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <controller type='ide' index='0'> <alias name='ide0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='bridge'> <mac address='52:54:00:e3:13:86'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/1'/> <target port='0'/> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/1'> <source path='/dev/pts/1'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'> <listen type='address' address='127.0.0.1'/> </graphics> <video> <model type='cirrus' vram='9216' heads='1'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memballoon> </devices> <seclabel type='dynamic' model='apparmor' relabel='yes'> <label>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</label> <imagelabel>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</imagelabel> </seclabel> </domain> I'm sort of lost as to where I can look to get the machine up and running again. On the same instance of kvm I have another server running which is working fine. Both are Ubuntu 12.04. All help is welcome....
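    A couple of things worth trying while the guest is in this state (sketch; assumes libguestfs is installed for the offline inspection, and that the guest runs a getty on ttyS0 for the console to show a login prompt):

        # attach to the guest's serial console defined in the XML above
        virsh console myserver

        # or look at the guest's disk offline, read-only
        guestmount -d myserver -i --ro /mnt/guest
        ls -l /mnt/guest/etc/rc2.d | grep ssh      # is sshd still enabled at boot?
        tail /mnt/guest/var/log/auth.log           # any sshd start-up errors before the crash?
        fusermount -u /mnt/guest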

    Read the article

  • How do I go about overloading C++ operators to allow for chaining?

    - by fneep
    I, like so many programmers before me, am tearing my hair out writing the right-of-passage-matrix-class-in-C++. I have never done very serious operator overloading and this is causing issues. Essentially, by stepping through This is what I call to cause the problems. cMatrix Kev = CT::cMatrix::GetUnitMatrix(4, true); Kev *= 4.0f; cMatrix Baz = Kev; Kev = Kev+Baz; //HERE! What seems to be happening according to the debugger is that Kev and Baz are added but then the value is lost and when it comes to reassigning to Kev, the memory is just its default dodgy values. How do I overload my operators to allow for this statement? My (stripped down) code is below. //header class cMatrix { private: float* _internal; UInt32 _r; UInt32 _c; bool _zeroindexed; //fast, assumes zero index, no safety checks float cMatrix::_getelement(UInt32 r, UInt32 c) { return _internal[(r*this->_c)+c]; } void cMatrix::_setelement(UInt32 r, UInt32 c, float Value) { _internal[(r*this->_c)+c] = Value; } public: cMatrix(UInt32 r, UInt32 c, bool IsZeroIndexed); cMatrix( cMatrix& m); ~cMatrix(void); //operators cMatrix& operator + (cMatrix m); cMatrix& operator += (cMatrix m); cMatrix& operator = (const cMatrix &m); }; //stripped source file cMatrix::cMatrix(cMatrix& m) { _r = m._r; _c = m._c; _zeroindexed = m._zeroindexed; _internal = new float[_r*_c]; UInt32 size = GetElementCount(); for (UInt32 i = 0; i < size; i++) { _internal[i] = m._internal[i]; } } cMatrix::~cMatrix(void) { delete[] _internal; } cMatrix& cMatrix::operator+(cMatrix m) { return cMatrix(*this) += m; } cMatrix& cMatrix::operator*(float f) { return cMatrix(*this) *= f; } cMatrix& cMatrix::operator*=(float f) { UInt32 size = GetElementCount(); for (UInt32 i = 0; i < size; i++) { _internal[i] *= f; } return *this; } cMatrix& cMatrix::operator+=(cMatrix m) { if (_c != m._c || _r != m._r) { throw new cCTException("Cannot add two matrix classes of different sizes."); } if (!(_zeroindexed && m._zeroindexed)) { throw new cCTException("Zero-Indexed mismatch."); } for (UInt32 row = 0; row < _r; row++) { for (UInt32 column = 0; column < _c; column++) { float Current = _getelement(row, column) + m._getelement(row, column); _setelement(row, column, Current); } } return *this; } cMatrix& cMatrix::operator=(const cMatrix &m) { if (this != &m) { _r = m._r; _c = m._c; _zeroindexed = m._zeroindexed; delete[] _internal; _internal = new float[_r*_c]; UInt32 size = GetElementCount(); for (UInt32 i = 0; i < size; i++) { _internal[i] = m._internal[i]; } } return *this; }
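    The symptom described (the sum is computed, then garbage appears after the assignment) is what happens when a binary operator returns a reference to a temporary that has already been destroyed. A sketch of the usual chaining-friendly shape, adapted to the class above - note the header declaration changes to return by value:

        // in the header:  cMatrix operator+(cMatrix m);   // by value, not cMatrix&

        cMatrix cMatrix::operator+(cMatrix m)   // m is already a private copy of the right-hand side
        {
            m += *this;    // reuse operator+= for the element-wise addition
            return m;      // returned by value, so "Kev = Kev + Baz" assigns from a live object
        }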

    Read the article

  • Arch Linux with an nginx/django setup refuses to display ANYTHING

    - by Holland
    I'm on Amazon Ec2, with an Arch Linux server. While I truly am loving it, I'm having the issue of actually getting nginx to display anything. Everytime I try to throw my hostname into the browser, the browser states that it's not available for some reason - almost as if the host doesn't even exist. One thing I'd like to know is, how can I get this up and running? Is there a specific arch linux configuration I have to do to make it web accessible? I have port 80 open, as well as port 22. I've tried using gunicorn, python-flup, and nginx. Nginx Config user http; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name _; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; #charset koi8-r; location ^~ /media/ { root /path/to/media; } location ^~ /admin-media/ { root /usr/lib/python2.7/site-packages/django/contrib/admin/media; } location / { root /path/to/root/; fastcgi_pass 127.0.0.1:8080; fastcgi_param SERVER_NAME $server_name; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param QUERY_STRING $query_string; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_pass_header Authorization; fastcgi_intercept_errors off; fastcgi_index index.html; index index.htm index.html; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /etc/nginx/html/50x.html; } } # server { # listen 80; # server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; # location / { # root html; # index index.html index.htm; # } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { root html; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} #} # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } I can't quite tell if it's a server issue or a configuration issue: I've followed so many guides now I can't even count them all. 
The thing is that Django itself is working fine, and my permissions on the document root where the site files are stored are 777. On top of that, I have a git repo which works perfectly fine, and django, python, and runfcgi all start without issues. The same goes for gunicorn, when I do a gunicorn_django -b 0.0.0.0:8000 in my document root. Here is my output from that: 2012-04-15 05:17:37 [3124] [INFO] Starting gunicorn 0.14.2 2012-04-15 05:17:37 [3124] [INFO] Listening at: http://0.0.0.0:8081 (3124) 2012-04-15 05:17:37 [3124] [INFO] Using worker: sync 2012-04-15 05:17:37 [3127] [INFO] Booting worker with pid: 3127 As far as I know, everything seems fine, as do error.log and access.log for nginx. The access log is completely blank, for that matter. I just feel lost here; what would be a step in the right direction for debugging an issue such as this?
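    Not a fix, but a quick way to narrow down whether this is a server-side problem or something in front of it (e.g. the EC2 security group) - a few checks run on the instance itself; adjust the ports to match your setup:

        # is nginx running and bound to port 80 on all interfaces?
        sudo netstat -tlnp | grep -E ':80|:8080'

        # is a local firewall rule dropping traffic despite the security group being open?
        sudo iptables -L -n

        # does the site answer locally, independent of DNS and the EC2 security group?
        curl -I http://127.0.0.1/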

    Read the article

  • How can I get the output of a command terminated by an alarm() call in Perl?

    - by rockyurock
    Case 1 If I run below command i.e iperf in UL only, then i am able to capture the o/p in txt file @output = readpipe("iperf.exe -u -c 127.0.0.1 -p 5001 -b 3600k -t 10 -i 1"); open FILE, ">Misplay_DL.txt" or die $!; print FILE @output; close FILE; Case 2 When I run iperf in DL mode , as we know server will start listening in cont. mode like below even after getting data from client (Here i am using server and client on LAN) @output = system("iperf.exe -u -s -p 5001 -i 1"); on server side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -s -p 5001 ------------------------------------------------------------ Server listening on UDP port 5001 Receiving 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1896] local 192.168.5.101 port 5001 connected with 192.168.5.101 port 4878 [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams [1896] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) command prompt does not appear , process is contd... on client side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -c 192.168.5.101 -p 5001 -b 3600k -t 2 -i 1 ------------------------------------------------------------ Client connecting to 192.168.5.101, UDP port 5001 Sending 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1880] local 192.168.5.101 port 4878 connected with 192.168.5.101 port 5001 [ ID] Interval Transfer Bandwidth [1880] 0.0- 1.0 sec 441 KBytes 3.61 Mbits/sec [1880] 1.0- 2.0 sec 439 KBytes 3.60 Mbits/sec [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec [1880] Server Report: [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) [1880] Sent 614 datagrams D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITY so with this as server is cont. listening and never terminates so can't take output of server side to a txt file as it is going to the next command itself to create a txt file so i adopted the alarm() function to terminate the server side (iperf.exe -u -s -p 5001) commands after it received all data from the client. could anybody suggest me the way.. Here is my code: #! /usr/bin/perl -w my $command = "iperf.exe -u -s -p 5001"; my @output; eval { local $SIG{ALRM} = sub { die "Timeout\n" }; alarm 20; #@output = `$command`; #my @output = readpipe("iperf.exe -u -s -p 5001"); #my @output = exec("iperf.exe -u -s -p 5001"); my @output = system("iperf.exe -u -s -p 5001"); alarm 0; }; if ($@) { warn "$command timed out.\n"; } else { print "$command successful. Output was:\n", @output; } open FILE, ">display.txt" or die $!; print FILE @output_1; close FILE; i know that with system command i cannot capture the o/p to a txt file but i tried with readpipe() and exec() calls also but in vain... could some one please take a look and let me know why the iperf.exe -u -s -p 5001 is not terminating even after the alarm call and to take the out put to a txt file
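    One hedged sketch of the usual Unix-style approach: open the server as a piped filehandle so its pid is known, read it line by line, and kill the child when the alarm fires (system() never returns the output, and the alarm by itself cannot stop the child it spawned). Note that on Win32 perls alarm() may not interrupt a blocking read reliably, so treat this as a starting point rather than a guaranteed fix:

        #!/usr/bin/perl -w
        use strict;

        my @output;
        my $pid = open(my $iperf, '-|', 'iperf.exe -u -s -p 5001 -i 1')
            or die "cannot start iperf: $!";

        eval {
            local $SIG{ALRM} = sub { die "Timeout\n" };
            alarm 20;
            while (my $line = <$iperf>) {   # lines collected so far survive the timeout
                push @output, $line;
            }
            alarm 0;
        };
        kill 'KILL', $pid;                  # make sure the listener really goes away
        close $iperf;

        open my $fh, '>', 'display.txt' or die $!;
        print {$fh} @output;
        close $fh;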

    Read the article

  • Problems with show hide jquery

    - by Michael
    I am someone can help... This should be easy but I am lost. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Show Hide Sample</title> <script src="js/jquery.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function(){ $('#content1').hide(); $('a').click(function(){ $('#content1').show('slow'); }); $('a#close').click(function(){ $('#content1').hide('slow'); }) }); </script> <style> body{font-size:12px; font-family:"Trebuchet MS"; background: #CCF} #content1{ border:1px solid #DDDDDD; padding:10px; margin-top:5px; width:300px; } </style> </head> <body> <a href="#" id="click">Test 1</a> <div class="box"> <div id="content1"> <h1 align="center">Hide 1</h1> <P> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Nullam pulvinar, enim ac hendrerit mattis, lorem mauris vestibulum tellus, nec porttitor diam nunc tempor dui. Aenean orci. Sed tempor diam eget tortor. Maecenas quis lorem. Nullam semper. Fusce adipiscing tellus non enim volutpat malesuada. Cras urna. Vivamus massa metus, tempus et, fermentum et, aliquet accumsan, lectus. Maecenas iaculis elit eget ipsum cursus lacinia. Mauris pulvinar.</p> <p><a href="#" id="close">Close</a></p> </div> </div> <a href="#" id="click">Test 2</a> <div class="box"> <div id="content1"> <h1 align="center">Hide 2</h1> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Nullam pulvinar, enim ac hendrerit mattis, lorem mauris vestibulum tellus, nec porttitor diam nunc tempor dui. Aenean orci. Sed tempor diam eget tortor. Maecenas quis lorem. Nullam semper. Fusce adipiscing tellus non enim volutpat malesuada. Cras urna. Vivamus massa metus, tempus et, fermentum et, aliquet accumsan, lectus. Maecenas iaculis elit eget ipsum cursus lacinia. Mauris pulvinar.</p> <p><a href="#" id="close">Close</a></p> </div> </div> </body> </html>

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections [closed]

    - by Hammer Bro.
    Note: I accidentally originally posted this question over at SuperUser, and I still think the issue is caused by some low-level networking practice of Windows 7, but I think the expertise here would be more apt to figuring it out. Apologies for the cross-post. Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes. So here are my basic observations: If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time. Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and router. Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again. The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless. All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT. What I've tried: Making sure each computer is being assigned a distinct IP. (They are.) Disabling UPnP and Stateful Packet Inspection on the router. Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine. Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled). What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just Operating System itself) is doing is giving the router fits. 
However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions. I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me. Anyone have any ideas what the cause and possible solution could be? Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.

    Read the article

  • A format for storing personal contacts in a database

    - by Gart
    I'm thinking of the best way to store personal contacts in a database for a business application. The traditional and straightforward approach would be to create a table with columns for each element, i.e. Name, Telephone Number, Job title, Address, etc... However, there are known industry standards for this kind of data, like for example vCard, or hCard, or vCard-RDF/XML or even Windows Contacts XML Schema. Utilizing an standard format would offer some benefits, like inter-operablilty with other systems. But how can I decide which method to use? The requirements are mainly to store the data. Search and ordering queries are highly unlikely but possible. The volume of the data is 100,000 records at maximum. My database engine supports native XML columns. I have been thinking to use some XML-based format to store the personal contacts. Then it will be possible to utilize XML indexes on this data, if searching and ordering is needed. Is this a good approach? Which contacts format and schema would you recommend for this? Edited after first answers Here is why I think the straightforward approach is bad. This is due to the nature of this kind of data - it is not that simple. The personal contacts it is not well-structured data, it may be called semi-structured. Each contact may have different data fields, maybe even such fields which I cannot anticipate. In my opinion, each piece of this data should be treated as important information, i.e. no piece of data could be discarded just because there was no relevant column in the database. If we took it further, assuming that no data may be lost, then we could create a big text column named Comment or Description or Other and put there everything which cannot be fitted well into table columns. But then again - the data would lose structure - this might be bad. If we wanted structured data then - according to the database design principles - the data should be decomposed into entities, and relations should be established between the entities. But this adds complexity - there are just too many entities, and lots of design desicions should be made, like "How do we store Address? Personal Name? Phone number? How do we encode home phone numbers and mobile phone numbers? How about other contact info?.." The relations between entities are complex and multiple, and each relation is a table in the database. Each relation needs to be documented in the design papers. That is a lot of work to do. But it is possible to avoid the complexity entirely - just document that the data is stored according to such and such standard schema, period. Then anybody who would be reading that document should easily understand what it was all about. Finally, this is all about using an industry standard. The standard is, hopefully, designed by some clever people who anticipated and described the structure of personal contacts information much better than I ever could. Why should we all reinvent the wheel?? It's much easier to use a standard schema. The problem is, there are just too many standards - it's not easy to decide which one to use!
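    If the XML-column route is chosen, a minimal sketch of what it could look like (SQL Server syntax shown purely as an example of a native XML column plus index; the /contact/phone path is an illustrative schema, not a recommendation):

        -- one row per person, the whole card kept as a single XML document
        CREATE TABLE Contacts (
            ContactId INT IDENTITY PRIMARY KEY,
            Card      XML NOT NULL
        );

        -- XML index to support the occasional search/order query
        CREATE PRIMARY XML INDEX IX_Contacts_Card ON Contacts(Card);

        -- querying a field that has no dedicated column
        SELECT ContactId
        FROM   Contacts
        WHERE  Card.exist('/contact/phone[@type="mobile"]') = 1;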

    Read the article

  • How to create Large resumable download from a secured location .NET

    - by Kelvin H
    I need to preface I'm not a .NET coder at all, but to get partial functionality, I modified a technet chunkedfilefetch.aspx script that uses chunked Data Reading and writing Streamed method of doing file transfer, to get me half-way. iStream = New System.IO.FileStream(path, System.IO.FileMode.Open, _ IO.FileAccess.Read, IO.FileShare.Read) dataToRead = iStream.Length Response.ContentType = "application/octet-stream" Response.AddHeader("Content-Length", file.Length.ToString()) Response.AddHeader("Content-Disposition", "attachment; filename=" & filedownload) ' Read and send the file 16,000 bytes at a time. ' While dataToRead 0 If Response.IsClientConnected Then length = iStream.Read(buffer, 0, 16000) Response.OutputStream.Write(buffer, 0, length) Response.Flush() ReDim buffer(16000) ' Clear the buffer ' dataToRead = dataToRead - length Else ' Prevent infinite loop if user disconnects ' dataToRead = -1 End If End While This works great on files up to 2GB and is fully functioning now.. But only one problem it doesn't allow for resume. I took the original code called it fetch.aspx and pass an orderNUM through the URL. fetch.aspx&ordernum=xxxxxxx It then reads the filename/location from the database occording to the ordernumber, and chunks it out from a secured location NOT under the webroot. I need a way to make this resumable, by the nature of the internet and large files people always get disconnected and would like to resume where they left off. But any resumable articles i've read, assume the file is within the webroot.. ie. http://www.devx.com/dotnet/Article/22533/1954 Great article and works well, but I need to stream from a secured location. I'm not a .NET coder at all, at best i can do a bit of coldfusion, if anyone could help me modify a handler to do this, i would really appreciate it. Requirements: I Have a working fetch.aspx script that functions well and uses the above code snippet as a base for the streamed downloading. Download files are large 600MB and are stored in a secured location outside of the webroot. Users click on the fetch.aspx to start the download, and would therefore be clicking it again if it was to fail. If the ext is a .ASPX and the file being sent is a AVI, clicking on it would completely bypass an IHTTP handler mapped to .AVI ext, so this confuses me From what I understand the browser will read and match etag value and file modified date to determine they are talking about the same file, then a subsequent accept-range is exchanged between the browser and IIS. Since this dialog happens with IIS, we need to use a handler to intercept and respond accordingly, but clicking on the link would send it to an ASPX file which the handeler needs to be on an AVI fiel.. Also confusing me. If there is a way to request the initial HTTP request header containing etag, accept-range into the normal .ASPX file, i could read those values and if the accept-range and etag exist, start chunking at that byte value somehow? but I couldn't find a way to transfer the http request headers since they seem to get lost at the IIS level. OrderNum which is passed in the URL string is unique and could be used as the ETag Response.AddHeader("ETag", request("ordernum")) Files need to be resumable and chunked out due to size. File extensions are .AVI so a handler could be written around it. 
IIS 6.0 Web Server. Any help would really be appreciated; I've been reading and reading and downloading code, but none of the examples given meet my situation, with the original file being streamed from outside of the webroot. Please help me get a handle on these HTTP handlers :)
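    As a rough sketch of the resumable part inside the existing fetch.aspx (not a tested handler): honour a standard "Range: bytes=N-" request header, reply with 206 Partial Content, and seek the FileStream before the chunk loop above runs. Variable names follow the snippet above; everything else is an assumption to illustrate the idea:

        ' sketch only - honour a resume request before the 16,000-byte chunk loop runs
        Dim startByte As Long = 0
        Dim rangeHeader As String = Request.Headers("Range")

        If Not String.IsNullOrEmpty(rangeHeader) AndAlso rangeHeader.StartsWith("bytes=") Then
            startByte = Long.Parse(rangeHeader.Substring(6).Split("-"c)(0))
            Response.StatusCode = 206   ' Partial Content
            Response.AddHeader("Content-Range", _
                String.Format("bytes {0}-{1}/{2}", startByte, iStream.Length - 1, iStream.Length))
        End If

        Response.AddHeader("Accept-Ranges", "bytes")
        Response.AddHeader("ETag", Request("ordernum"))   ' unique per order, as noted above
        Response.AddHeader("Content-Length", (iStream.Length - startByte).ToString())
        iStream.Seek(startByte, IO.SeekOrigin.Begin)      ' then chunk from here as before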

    Read the article

  • Both tab & hover triggered popups problem

    - by carpenter
    I am trying to display divs when hovering over thumb-nails and/or both when tabbing onto them. If I stick to my mouse, the popups seem to work OK - if I start with a tab press I can show the popops also (foward only - no shift + tab yet). Any help getting them to play well together? <script type="text/javascript"> // Note: the below is being run from an onmouseover on a asp:HyperLink at the moment function onhovering_and_tabbingon2() { var active_hover = 0; var num_of_thumb; // set the default focus onto the first thumb-nail and make its popup display document.getElementById('link_no' + active_hover).focus(); // set focus on the first thumb $('#pop' + active_hover).toggleClass('popup'); // show its popup as it is hidden // for when hovering over the thumbs $(".box img").hover( // so as to effect only images/thumb-nails within divs of class=box when hovering over them function () { // test for if the image is a thumb-entry and not a popup image - of class=thumbs2 thumb = $(this).attr('class'); if (thumb != "thumbs2") { // I need to add/toggle the class here to a "div" and not to the image being hovered on, a div with text that corrosponds to the hovered on image though // so grab the number of the thumb_entry - to use to id the div. num_of_thumb = $(this).attr('id').replace('thumb_entry_No', ''); // find the div with id 'pop' + num_of_thumb, and toggleClass on it $('#pop' + num_of_thumb).toggleClass('popup'); // shows the hovered on pic's popup // move the focus to the hovered on pic's a tag ?????? document.getElementById('link_no' + num_of_thumb).focus(); // if the previous popup that was showing was in box2.. if (active_hover == 1 || active_hover% 2 == 1) { $('#pop' + active_hover).toggleClass('popup4_line2'); } else { // remove/toggle the previous active popup's visibility $('#pop' + active_hover).toggleClass('popup'); } // set the new active_hover to num_of_thumb active_hover = num_of_thumb; } }, function () { } ); // same thing again - but for my second row/line of entries/thumb-nails... $(".box2 img").hover( // so as to effect only images/thumbs within divs of class=box2 function () { // test if the image is a thumb-entry and not a popup image thumb = $(this).attr('class'); if (thumb != "thumbs2") { // I need to add the class here to a "div" and not to the image being hovered on, a div that corrosponds to the hovered on image though // so grab the number of the thumb_entry being hovered on, so as to id the div. num_of_thumb = $(this).attr('id').replace('thumb_entry_No', ''); // find the div with id='pop' + num_of_thumb, and toggleClass on it $('#pop' + num_of_thumb).toggleClass('popup4_line2'); // move the focus to the hovered on pic's a tag ?? document.getElementById('link_no' + num_of_thumb).focus(); // if the previous popup that was showing was in box.. // or if the active_hover is even (modulus) if (active_hover == 0 || active_hover % 2 == 0) { $('#pop' + active_hover).toggleClass('popup'); } else { // remove the previous active visible popup $('#pop' + active_hover).toggleClass('popup4_line2'); } // set the new active_hover to num_of_thumb active_hover = num_of_thumb; } }, function () { } ); // todo: I would like to try to show the popups when tabbing through the thumb-nails also // but am lost... document.onkeyup = keypress; // ???? 
function keypress() { // alert("The key pressed was: " + window.event.keyCode); if (window.event.keyCode == "9") { //alert("The tab key was pressed!"); active_hover = active_hover + 1; // for tabbing into box 2 (odd numbers) if (active_hover == 1 || active_hover % 2 == 1) { // toggle visibility of previous popup $('#pop' + (active_hover - 1)).toggleClass('popup'); // toggle visibility of current popup $('#pop' + active_hover).toggleClass('popup4_line2'); // } else { // for tabbing into box from box2 // toggle visibility of previous popup $('#pop' + (active_hover - 1)).toggleClass('popup4_line2'); // toggle visibility of current popup $('#pop' + active_hover).toggleClass('popup'); // } // ?????? // // if (window.event.keyCode == "shift&9") { } } } } </script>
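    Two hedged sketches for the tabbing part, since window.event.keyCode is IE-only: either let jQuery normalise the key event (e.which, e.shiftKey), or skip key tracking entirely and react to focus landing on the links, which handles Shift+Tab for free. The popup-toggling bodies are left as placeholders to be filled in with the same class logic used above:

        // option 1: normalised key handling
        $(document).keydown(function (e) {
            if (e.which === 9) {            // 9 = Tab
                if (e.shiftKey) {
                    // moving backwards: show the previous thumbnail's popup here
                } else {
                    // moving forwards: show the next thumbnail's popup here
                }
            }
        });

        // option 2: react to focus landing on a thumbnail link instead
        $('.box a, .box2 a').focus(function () {
            var num = this.id.replace('link_no', '');
            // toggle #pop<num> visible here and hide the previously active popup
        });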

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with a DRBD storage for easy failover. Most of the time, immediatly after booting the DomU, I get an IO error: [ 3.153370] EXT3-fs (xvda2): using internal journal [ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team [ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max) [ 3.515604] init: failsafe main process (397) killed by TERM signal [ 3.801589] blkfront: barrier: write xvda2 op failed [ 3.801597] blkfront: xvda2: barrier or flush: disabled [ 3.801611] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801630] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801642] Buffer I/O error on device xvda2, logical block 6521396 [ 3.801652] lost page write due to I/O error on xvda2 [ 3.801755] Aborting journal on device xvda2. [ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal [ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only [ 3.814754] journal commit I/O error [ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1 [ 6.992267] init: plymouth-splash main process (546) terminated with status 1 The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd (by "wo:f, meaning flush, the next method drbd chooses after barrier): 3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---- ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 And on the other host: 3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---- ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 I also enabled the option disable_sendpage, as per the drbd docs: cat /sys/module/drbd/parameters/disable_sendpage Y I also tried adding barriers=0 to fstab as mount option. Still it sometimes says: [ 58.603896] blkfront: barrier: write xvda2 op failed [ 58.603903] blkfront: xvda2: barrier or flush: disabled I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery backed, it would not be smart anyway. Why does it still compain about barriers when I disabled that? Both host are: Debian: 6.0.4 uname -a: Linux 2.6.32-5-xen-amd64 drbd: 8.3.7 Xen: 4.0.1 Guest: Ubuntu 12.04 LTS uname -a: Linux 3.2.0-24-generic pvops drbd resource: resource drbdvm { meta-disk internal; device /dev/drbd3; startup { # The timeout value when the last known state of the other side was available. 0 means infinite. wfc-timeout 0; # Timeout value when the last known state was disconnected. 0 means infinite. degr-wfc-timeout 180; } syncer { # This is recommended only for low-bandwidth lines, to only send those # blocks which really have changed. #csums-alg md5; # Set to about half your net speed rate 60M; # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently) verify-alg md5; } net { # The manpage says this is recommended only in pre-production (because of its performance), to determine # if your LAN card has a TCP checksum offloading bug. #data-integrity-alg md5; } disk { # Detach causes the device to work over-the-network-only after the # underlying disk fails. Detach is not default for historical reasons, but is # recommended by the docs. # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event... 
on-io-error detach; # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd. If you don't disable it, you get IO errors. no-disk-barrier; } on host1 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.1:7792; } on host2 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.2:7792; } } DomU cfg: bootloader = '/usr/lib/xen-default/bin/pygrub' vcpus = '2' memory = '512' # # Disk device(s). # root = '/dev/xvda2 ro' disk = [ 'phy:/dev/drbd3,xvda2,w', 'phy:/dev/universe/drbdvm-swap,xvda1,w', ] # # Hostname # name = 'drbdvm' # # Networking # # fake IP for posting vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ] # # Behaviour # on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' In my test setup: the primary host's storage is 9650SE SATA-II RAID PCIe with battery. The secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
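    On the "does ext3 have a nobarrier option" point: ext3 spells it barrier=0 / barrier=1 rather than the ext4-style nobarrier, so inside the DomU it would look roughly like this (sketch; only sensible where the underlying storage is battery-backed, as noted above):

        # /etc/fstab inside the guest (example line)
        /dev/xvda2  /  ext3  noatime,errors=remount-ro,barrier=0  0  1

        # or on a running system
        mount -o remount,barrier=0 /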

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. Directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones. We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages: ext3_dx_add_entry: Directory index full! The disk in question has plenty of inodes remaining: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda3 60719104 3465660 57253444 6% / So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at--I want to tune it down; this enormous directory caused all sorts of issues. Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here: rm -rf (dir)I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact. unlink(2) on the directory: Definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh. while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2/dev/null; done ) This is actually the shortened version; the real one I'm running, which just adds some progress-reporting and a clean stop when we run out of files to delete, is: export i=0; time ( while [ true ]; do ls -Uf | head -n 3 | grep -qF '.png' || break; ls -Uf | head -n 10000 | xargs rm -f 2/dev/null; export i=$(($i+10000)); echo "$i..."; done ) This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so. Now, for the questions: As mentioned above, is the per-directory entry limit tunable? Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by "ls -U", and it took perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it's now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed? Is there a better way to do this sort of thing? Not store millions of files in a directory; I know that's silly, and it wouldn't have happened on my watch. 
Googling the problem and looking through SF and SO offers a lot of variations on "find" that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking. Thanks for reading the small novel; feel free to ask questions and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that. Final script output!: 2970000... 2980000... 2990000... 3000000... 3010000... real 253m59.331s user 0m6.061s sys 5m4.019s So, three million files deleted in a bit over four hours.
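    On the first question (whether the per-directory limit is tunable): it is tied to ext3's hashed directory index, so a hedged first step is simply to see whether dir_index is enabled and, if needed, rebuild the indexes offline - this addresses the limit question, not the deletion speed:

        # is the hashed b-tree directory index feature on for this filesystem?
        tune2fs -l /dev/sda3 | grep -i features

        # enable it for future directories, then rebuild existing indexes (unmounted!)
        tune2fs -O dir_index /dev/sda3
        e2fsck -Df /dev/sda3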

    Read the article

  • java - BigDecimal

    - by Mk12
    I was trying to make my own class for currencies using longs, but Apparently I should use BigDecimal (and then whenever I print it just add the $ sign before it). Could someone please get me started? What would be the best way to use BigDecimals for Dollar currencies, like making it at least but no more than 2 decimal places for the cents, etc. The api for BigDecimal is huge, and I don't know which methods to use. Also, BigDecimal has better precision, but isn't that all lost if it passes through a double? if I do new BigDecimal(24.99), how will it be different than using a double? Or should I use the constructor that uses a String instead? EDIT: I decided to use BigDecimals, and then use: private static final java.text.NumberFormat moneyt = java.text.NumberFormat.getCurrencyInstance(); { money.setRoundingMode(RoundingMode.HALF_EVEN); } and then whenever I display the BigDecimals, to use money.format(theBigDecimal). Is this alright? Should I have the BigDecimal rounding it too? Because then it doesn't get rounded before it does another operation.. if so, could you show me how? And how should I create the BigDecimals? new BigDecimal("24.99") ? Well, after many comments to Vineet Reynolds (thanks for keeping coming back and answering), this is what I have decided. I use BigDecimals and a NumberFormat. Here is where I create the NumberFormat instance. private static final NumberFormat money; static { money = NumberFormat.getCurrencyInstance(Locale.CANADA); money.setRoundingMode(RoundingMode.HALF_EVEN); } Here is my BigDecimal: private final BigDecimal price; Whenever I want to display the price, or another BigDecimal that I got through calculations of price, I use: money.format(price) to get the String. Whenever I want to store the price, or a calculation from price, in a database or in a field or anywhere, I use (for a field): myCalculatedResult = price.add(new BigDecimal("34.58")).setScale(2, RoundingMode.HALF_EVEN); .. but I'm thinking now maybe I should not have the NumberFormat round, but when I want to display do this: System.out.print(money.format(price.setScale(2, RoundingMode.HALF_EVEN); That way to ensure the model and things displayed in the view are the same. I don't do: price = price.setScale(2, RoundingMode.HALF_EVEN); Because then it would always round to 2 decimal places and wouldn't be as precise in calculations. So its all solved now, I guess. But is there any shortcut to typing calculatedResult.setScale(2, RoundingMode.HALF_EVEN) all the time? All I can think of is static importing HALF_EVEN... EDIT: I've changed my mind a bit, I think if I store a value, I won't round it unless I have no more operations to do with it, e.g. if it was the final total that will be charged to someone. I will only round things at the end, or whenever necessary, and I will still use NumberFormat for the currency formatting, but since I always want rounding for display, I made a static method for display: public static String moneyFormat(BigDecimal price) { return money.format(price.setScale(2, RoundingMode.HALF_EVEN)); } So values stored in variables won't be rounded, and I'll use that method to display prices.
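    On the new BigDecimal(24.99) question specifically, a tiny example of the difference (output digits shown approximately):

        import java.math.BigDecimal;

        public class MoneyDemo {
            public static void main(String[] args) {
                // the double literal is already an approximation before BigDecimal sees it
                System.out.println(new BigDecimal(24.99));
                // prints something like 24.989999999999998436805981327779591083526611328125

                System.out.println(new BigDecimal("24.99"));     // prints exactly 24.99
                System.out.println(BigDecimal.valueOf(24.99));   // goes via Double.toString -> 24.99
            }
        }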

    Read the article

  • Why are there 3 conflicting OpenCV camera calibration formulas?

    - by John
    I'm having a problem with OpenCV's various parameterization of coordinates used for camera calibration purposes. The problem is that three different sources of information on image distortion formulae apparently give three non-equivalent description of the parameters and equations involved: (1) In their book "Learning OpenCV…" Bradski and Kaehler write regarding lens distortion (page 376): xcorrected = x * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ], ycorrected = y * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ p1 * ( r^2 + 2 * y^2 ) + 2 * p2 * x * y ], where r = sqrt( x^2 + y^2 ). Assumably, (x, y) are the coordinates of pixels in the uncorrected captured image corresponding to world-point objects with coordinates (X, Y, Z), camera-frame referenced, for which xcorrected = fx * ( X / Z ) + cx and ycorrected = fy * ( Y / Z ) + cy, where fx, fy, cx, and cy, are the camera's intrinsic parameters. So, having (x, y) from a captured image, we can obtain the desired coordinates ( xcorrected, ycorrected ) to produced an undistorted image of the captured world scene by applying the above first two correction expressions. However... (2) The complication arises as we look at OpenCV 2.0 C Reference entry under the Camera Calibration and 3D Reconstruction section. For ease of comparison we start with all world-point (X, Y, Z) coordinates being expressed with respect to the camera's reference frame, just as in #1. Consequently, the transformation matrix [ R | t ] is of no concern. In the C reference, it is expressed that: x' = X / Z, y' = Y / Z, x'' = x' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ 2 * p1 * x' * y' + p2 * ( r'^2 + 2 * x'^2 ) ], y'' = y' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ p1 * ( r'^2 + 2 * y'^2 ) + 2 * p2 * x' * y' ], where r' = sqrt( x'^2 + y'^2 ), and finally that u = fx * x'' + cx, v = fy * y'' + cy. As one can see these expressions are not equivalent to those presented in #1, with the result that the two sets of corrected coordinates ( xcorrected, ycorrected ) and ( u, v ) are not the same. Why the contradiction? It seems to me the first set makes more sense as I can attach physical meaning to each and every x and y in there, while I find no physical meaning in x' = X / Z and y' = Y / Z when the camera focal length is not exactly 1. Furthermore, one cannot compute x' and y' for we don't know (X, Y, Z). (3) Unfortunately, things get even murkier when we refer to the writings in Intel's Open Source Computer Vision Library Reference Manual's section Lens Distortion (page 6-4), which states in part: "Let ( u, v ) be true pixel image coordinates, that is, coordinates with ideal projection, and ( u ~, v ~ ) be corresponding real observed (distorted) image coordinates. Similarly, ( x, y ) are ideal (distortion-free) and ( x ~, y ~ ) are real (distorted) image physical coordinates. Taking into account two expansion terms gives the following: x ~ = x * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ] y ~ = y * ( 1 + k1 * r^2 + k2 * r^4 ] + [ 2 p2 * x * y + p2 * ( r^2 + 2 * y^2 ) ], where r = sqrt( x^2 + y^2 ). ... "Because u ~ = cx + fx * u and v ~ = cy + fy * v , … the resultant system can be rewritten as follows: u ~ = u + ( u – cx ) * [ k1 * r^2 + k2 * r^4 + 2 * p1 * y + p2 * ( r^2 / x + 2 * x ) ] v ~ = v + ( v – cy ) * [ k1 * r^2 + k2 * r^4 + 2 * p2 * x + p1 * ( r^2 / y + 2 * y ) ] The latter relations are used to undistort images from the camera." 
Well, it would appear that the expressions involving x ~ and y ~ coincided with the two expressions given at the top of this writing involving xcorrected and ycorrected. However, x ~ and y ~ do not refer to corrected coordinates, according to the given description. I don't understand the distinction between the meaning of the coordinates ( x ~, y ~ ) and ( u ~, v ~ ), or for that matter, between the pairs ( x, y ) and ( u, v ). From their descriptions it appears their only distinction is that ( x ~, y ~ ) and ( x, y ) refer to 'physical' coordinates while ( u ~, v ~ ) and ( u, v ) do not. What is this distinction all about? Aren't they all physical coordinates? I'm lost! Thanks for any input!
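    For side-by-side comparison, here is set (2) again, typeset in normalized coordinates (this is only a restatement of the reference-manual model quoted above, not additional information):

        \[ x' = X/Z, \qquad y' = Y/Z, \qquad r'^2 = x'^2 + y'^2 \]
        \[ x'' = x'\,(1 + k_1 r'^2 + k_2 r'^4 + k_3 r'^6) + 2 p_1 x' y' + p_2\,(r'^2 + 2 x'^2) \]
        \[ y'' = y'\,(1 + k_1 r'^2 + k_2 r'^4 + k_3 r'^6) + p_1\,(r'^2 + 2 y'^2) + 2 p_2 x' y' \]
        \[ u = f_x\,x'' + c_x, \qquad v = f_y\,y'' + c_y \]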

    Read the article

  • Problems with show/hide in jQuery

    - by Michael
    I am trying to get show/hide to work on multiple objects, but I am unable to get it to work. Any help would be nice and much appreciated... I am lost on how to do this. If I only do one show/hide it works fine, but with more than one it does not work properly. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Show Hide Sample</title> <script src="js/jquery.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function(){ $('#content1').hide(); $('a').click(function(){ $('#content1').show('slow'); }); $('a#close').click(function(){ $('#content1').hide('slow'); }) }); </script> <style> body{font-size:12px; font-family:"Trebuchet MS"; background: #CCF} #content1{ border:1px solid #DDDDDD; padding:10px; margin-top:5px; width:300px; } </style> </head> <body> <a href="#" id="click">Test 1</a> <div class="box"> <div id="content1"> <h1 align="center">Hide 1</h1> <P> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Nullam pulvinar, enim ac hendrerit mattis, lorem mauris vestibulum tellus, nec porttitor diam nunc tempor dui. Aenean orci. Sed tempor diam eget tortor. Maecenas quis lorem. Nullam semper. Fusce adipiscing tellus non enim volutpat malesuada. Cras urna. Vivamus massa metus, tempus et, fermentum et, aliquet accumsan, lectus. Maecenas iaculis elit eget ipsum cursus lacinia. Mauris pulvinar.</p> <p><a href="#" id="close">Close</a></p> </div> </div> <a href="#" id="click">Test 2</a> <div class="box"> <div id="content1"> <h1 align="center">Hide 2</h1> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Nullam pulvinar, enim ac hendrerit mattis, lorem mauris vestibulum tellus, nec porttitor diam nunc tempor dui. Aenean orci. Sed tempor diam eget tortor. Maecenas quis lorem. Nullam semper. Fusce adipiscing tellus non enim volutpat malesuada. Cras urna. Vivamus massa metus, tempus et, fermentum et, aliquet accumsan, lectus. Maecenas iaculis elit eget ipsum cursus lacinia. Mauris pulvinar.</p> <p><a href="#" id="close">Close</a></p> </div> </div> </body> </html>

    Read the article

  • How to get the output of a command terminated by an alarm() call

    - by rockyurock
    Case 1: If I run the command below, i.e. iperf in UL mode only, then I am able to capture the output in a text file: @output = readpipe("iperf.exe -u -c 127.0.0.1 -p 5001 -b 3600k -t 10 -i 1"); open FILE, ">Misplay_DL.txt" or die $!; print FILE @output; close FILE; Case 2: When I run iperf in DL mode, the server keeps listening continuously, as shown below, even after getting data from the client (here I am using server and client on a LAN): @output = system("iperf.exe -u -s -p 5001 -i 1"); On the server side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -s -p 5001 ------------------------------------------------------------ Server listening on UDP port 5001 Receiving 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1896] local 192.168.5.101 port 5001 connected with 192.168.5.101 port 4878 [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams [1896] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) The command prompt does not return; the process continues. On the client side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -c 192.168.5.101 -p 5001 -b 3600k -t 2 -i 1 ------------------------------------------------------------ Client connecting to 192.168.5.101, UDP port 5001 Sending 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1880] local 192.168.5.101 port 4878 connected with 192.168.5.101 port 5001 [ ID] Interval Transfer Bandwidth [1880] 0.0- 1.0 sec 441 KBytes 3.61 Mbits/sec [1880] 1.0- 2.0 sec 439 KBytes 3.60 Mbits/sec [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec [1880] Server Report: [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) [1880] Sent 614 datagrams D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITY So, because the server keeps listening and never terminates, I can't get the server-side output into a text file; execution never reaches the next statement, which would create the file. I therefore adopted the alarm() function to terminate the server-side command (iperf.exe -u -s -p 5001) after it has received all the data from the client. Could anybody suggest a way to do this? Here is my code: #! /usr/bin/perl -w my $command = "iperf.exe -u -s -p 5001"; my @output; eval { local $SIG{ALRM} = sub { die "Timeout\n" }; alarm 20; #@output = `$command`; #my @output = readpipe("iperf.exe -u -s -p 5001"); #my @output = exec("iperf.exe -u -s -p 5001"); my @output = system("iperf.exe -u -s -p 5001"); alarm 0; }; if ($@) { warn "$command timed out.\n"; } else { print "$command successful. Output was:\n", @output; } open FILE, ">display.txt" or die $!; print FILE @output_1; close FILE; I know that with the system command I cannot capture the output to a text file, but I also tried the readpipe() and exec() calls, in vain... Could someone please take a look and let me know why iperf.exe -u -s -p 5001 is not terminating even after the alarm call, and how to get its output into a text file?
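One thing worth noting: alarm() only interrupts the Perl process; it does nothing to the iperf server itself, which keeps listening, so the command never finishes and its output is never fully collected. The child has to be killed explicitly when the timer fires. Purely to illustrate that run / capture / kill-on-timeout pattern, here is a rough sketch in C# (not Perl); the Perl analogue would be a piped open or fork, with kill() on the child's pid inside the ALRM handler.

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Text;

    class RunWithTimeout
    {
        // Illustration only: run an external command, collect its stdout,
        // and kill it if it is still running when the time budget expires.
        // The command line is the one from the question; the 20-second
        // budget mirrors the alarm(20) in the Perl snippet.
        static void Main()
        {
            var psi = new ProcessStartInfo("iperf.exe", "-u -s -p 5001")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false,
                CreateNoWindow = true
            };

            var output = new StringBuilder();
            using (var proc = Process.Start(psi))
            {
                proc.OutputDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
                proc.BeginOutputReadLine();

                if (!proc.WaitForExit(20000))
                {
                    proc.Kill();        // the crucial step: stop the server ourselves
                    proc.WaitForExit(); // let the remaining output drain
                }
            }

            File.WriteAllText("display.txt", output.ToString());
        }
    }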

    Read the article

  • Qt, MSVC, and /Zc:wchar_t- == I want to blow up the world

    - by Noah Roberts
    So Qt is compiled with /Zc:wchar_t- on Windows. What this means is that instead of wchar_t being a typedef for some internal type (__wchar_t I think) it becomes a typedef for unsigned short. The really cool thing about this is that the default for MSVC is the opposite, which of course means that the libraries you're using are likely compiled with wchar_t being a different type than Qt's wchar_t. This doesn't become an issue of course until you try to use something like std::wstring in your code; especially when one or more libraries have functions that accept it as parameters. What effectively happens is that your code happily compiles but then fails to link because it's looking for definitions using std::wstring<unsigned short...> but they only contain definitions expecting std::wstring<__wchar_t...> (or whatever). So I did some web searching and ran into this link: http://bugreports.qt.nokia.com/browse/QTBUG-6345 Based on the statement by Thiago Macieira, "Sorry, we will not support building Qt like this," I've been worried that fixing Qt to work like everything else might cause some problem and have been trying to avoid it. We recompiled all of our support libraries with the /Zc:wchar_t- flag and were fairly content with that until a couple of days ago, when we started trying to port over some serialization code (we're in the process of switching from Wx to Qt). Because of how Win32 works, and because Wx just wraps Win32, we've been using std::wstring to represent string data with the intent of making our product as i18n ready as possible. We did some testing, and Wx did not work with multibyte characters when trying to print special stuff (even not-so-special stuff like the degree symbol was an issue). I'm not so sure that Qt has this problem, since QString isn't just a wrapper to the underlying _TCHAR type but is a Unicode monster of some sort. At any rate, the serialization library in boost has compiled parts. We've attempted to recompile boost with /Zc:wchar_t-, but so far our attempts to tell bjam to do this have gone unheeded. We're at an impasse. From where I'm sitting I have three options: 1. Recompile Qt and hope it works with /Zc:wchar_t. There's some evidence around the web that others have done this, but I have no way of predicting what will happen. All attempts to ask Qt people on forums and such have gone unanswered. Hell, even in that very bug report someone asks why and it just sat there for a year. 2. Keep fighting with bjam until it listens. Right now I've got someone under me doing that, and I have more experience fighting with things to get what I want, but I do have to admit to getting rather tired of it. I'm also concerned that I'll KEEP running into this issue just because Qt wants to be a c**t. 3. Stop using wchar_t for anything. Unfortunately my i18n experience is pretty much 0, but it seems to me that I just need to find the right to/from function in QString (it has a BUNCH) to encode the Unicode data into 8-bit strings and vice versa. The UTF8 functions look promising, but I really want to be sure that no data will be lost if someone starts writing in their own language, and the documentation in QString frightens me a little into thinking that could happen. Of course, I could always run into some library that insists I use wchar_t, and then I'm back to 1 or 2, but I rather doubt that would happen. So, what's my question... Which of these options is my best bet?
Is Qt going to eventually cause me to gouge out my own eyes because I decided to compile it with /Zc:wchar_t anyway? What's the magic incantation to get boost to build with /Zc:wchar_t- and will THAT cause permanent mental damage? Can I get away with just using the standard 8-bit (well, 'common' anyway) character classes and be i18n compliant/ready? How do other Qt developers deal with this mess?

    Read the article

  • Pure JSP without mixing HTML, by writing HTML as Java-like code

    - by ADTC
    Please read before answering. This is a fantasy programming technique I'm dreaming up. I want to know if there's anything close in real life. The following JSP page: <% html { head { title {"Pure fantasy";} } body { h1 {"A heading with double quote (\") character";} p {"a paragraph";} String s = "a paragraph in string. the date is "; p { s; new Date().toString(); } table (Border.ZERO, new Padding(27)) { tr { for (int i = 0; i < 10; i++) { td {i;} } } } } } %> could generate the following HTML page: <html> <head> <title>Pure fantasy</title> </head> <body> <h1>A heading with double quote (") character</h1> <p>a paragraph</p> <p>a paragraph in string. the date is 11 December 2012</p> <table border="0" padding="27"> <tr> <td>0</td> <td>1</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td> <td>6</td> <td>7</td> <td>8</td> <td>9</td> </tr> </table> </body> </html> The thing about this fantasy is that it reuses the same old Java programming language technique that enables customized keywords, used in a way similar to if-else-then, while, try-catch, etc., to represent HTML tags in a non-HTML way that can easily be checked for syntactic correctness and, most importantly, can easily be mixed with regular Java code without being lost in a sea of <%, %>, <%=, out.write(), etc. An added feature is that strings can directly be placed as commands to print out into the generated HTML, something Java doesn't support (where pure strings have to be assigned to variables before use). Is there anything in real life that comes close? If not, is it possible to define customized keywords in Java or JSP? Or do I have to create an entirely new programming language for that? What problems do you see with this kind of setup? PS: I know you can use HTML libraries to construct HTML using Java code, but the problem with such libraries is that the source code itself doesn't have a readable HTML representation like the code above does - if you get what I mean.
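Stock JSP offers nothing with exactly this syntax, but the underlying idea, HTML expressed as nested blocks of ordinary code that the compiler can check, does exist in the form of builder-style DSLs (on the JVM, Groovy's MarkupBuilder is probably the closest real-life relative). Just to illustrate the nested-closure shape in a general-purpose curly-brace language, here is a tiny sketch in C#; the Html class and its methods are invented for the example and do no escaping.

    using System;
    using System.Text;

    // A deliberately tiny, hypothetical HTML builder, only to show the
    // "tags as nested code blocks" idea from the question. Not a real library.
    class Html
    {
        private readonly StringBuilder sb = new StringBuilder();

        public Html Tag(string name, Action<Html> body)
        {
            sb.Append('<').Append(name).Append('>');
            body(this);                                  // run the nested block immediately
            sb.Append("</").Append(name).Append('>');
            return this;
        }

        public Html Text(string s) { sb.Append(s); return this; }

        public override string ToString() { return sb.ToString(); }
    }

    class Demo
    {
        static void Main()
        {
            var page = new Html().Tag("html", h => h
                .Tag("head", hd => hd.Tag("title", t => t.Text("Pure fantasy")))
                .Tag("body", b =>
                {
                    b.Tag("h1", x => x.Text("A heading"));
                    b.Tag("p", x => x.Text("a paragraph in string. the date is " + DateTime.Now.ToLongDateString()));
                    b.Tag("table", tbl => tbl.Tag("tr", tr =>
                    {
                        for (int i = 0; i < 10; i++)
                            tr.Tag("td", td => td.Text(i.ToString()));
                    }));
                }));

            Console.WriteLine(page);
        }
    }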

    Read the article

  • fd partitions gone from 2 disks, md happy with it and resyncs. How to recover?

    - by d0nd
    Hey gurus, I need some help badly with this one. I run a server with a 6 TB md raid5 volume built over 7*1 TB disks. I've had to shut down the server lately, and when it went back up, 2 out of the 7 disks used for the raid volume had lost their configuration: dmesg: [ 10.184167] sda: sda1 sda2 sda3 // System disk [ 10.202072] sdb: sdb1 [ 10.210073] sdc: sdc1 [ 10.222073] sdd: sdd1 [ 10.229330] sde: sde1 [ 10.239449] sdf: sdf1 [ 11.099896] sdg: unknown partition table [ 11.255641] sdh: unknown partition table All 7 disks have the same geometry and were configured alike (fdisk output): Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x1e7481a5 Device Boot Start End Blocks Id System /dev/sdb1 1 121601 976760001 fd Linux raid autodetect All 7 disks (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in an md raid5 xfs volume. When booting, md, which was (obviously) out of sync, kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; xfs tried to do some shenanigans as well: dmesg: [ 19.566941] md: md0 stopped. [ 19.817038] md: bind<sdc1> [ 19.817339] md: bind<sdd1> [ 19.817465] md: bind<sde1> [ 19.817739] md: bind<sdf1> [ 19.817917] md: bind<sdh> [ 19.818079] md: bind<sdg> [ 19.818198] md: bind<sdb1> [ 19.818248] md: md0: raid array is not clean -- starting background reconstruction [ 19.825259] raid5: device sdb1 operational as raid disk 0 [ 19.825261] raid5: device sdg operational as raid disk 6 [ 19.825262] raid5: device sdh operational as raid disk 5 [ 19.825264] raid5: device sdf1 operational as raid disk 4 [ 19.825265] raid5: device sde1 operational as raid disk 3 [ 19.825267] raid5: device sdd1 operational as raid disk 2 [ 19.825268] raid5: device sdc1 operational as raid disk 1 [ 19.825665] raid5: allocated 7334kB for md0 [ 19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2 [ 19.825669] RAID5 conf printout: [ 19.825670] --- rd:7 wd:7 [ 19.825671] disk 0, o:1, dev:sdb1 [ 19.825672] disk 1, o:1, dev:sdc1 [ 19.825673] disk 2, o:1, dev:sdd1 [ 19.825675] disk 3, o:1, dev:sde1 [ 19.825676] disk 4, o:1, dev:sdf1 [ 19.825677] disk 5, o:1, dev:sdh [ 19.825679] disk 6, o:1, dev:sdg [ 19.899787] PM: Starting manual resume from disk [ 28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device [ 28.663228] XFS mounting filesystem md0 [ 28.884433] md: resync of RAID array md0 [ 28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. [ 28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync. [ 28.884433] md: using 128k window, over a total of 976759936 blocks. [ 29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal) [ 32.680486] XFS: xlog_recover_process_data: bad clientid [ 32.680495] XFS: log mount/recovery failed: error 5 [ 32.682773] XFS: log mount failed I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array, but it didn't work: no matter what was in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 and sdh1, which explains why it won't use them. I just don't know why those partitions are gone from /dev and how to re-add them...
blkid : /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2" /dev/sda2: TYPE="swap" /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3" /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" fdisk -l : Disk /dev/sda: 40.0 GB, 40020664320 bytes 255 heads, 63 sectors/track, 4865 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x8c878c87 Device Boot Start End Blocks Id System /dev/sda1 * 1 12 96358+ 83 Linux /dev/sda2 13 134 979965 82 Linux swap / Solaris /dev/sda3 135 4865 38001757+ 83 Linux Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x1e7481a5 Device Boot Start End Blocks Id System /dev/sdb1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xc9bdc1e9 Device Boot Start End Blocks Id System /dev/sdc1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xcc356c30 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe87f7a3d Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xb17a2d22 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x8f3bce61 Device Boot Start End Blocks Id System /dev/sdg1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xa98062ce Device Boot Start End Blocks Id System /dev/sdh1 1 121601 976760001 fd Linux raid autodetect I really dont know what happened nor how to recover from this mess. Needless to say the 5TB or so worth of data sitting on those disks are very valuable to me... Any idea any one? Did anybody ever experienced a similar situation or know how to recover from it ? Can someone help me? I'm really desperate... :x

    Read the article

  • Weighted round robin via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds): ;; ANSWER SECTION: orion.2x.to. 116 IN A 80.237.201.41 orion.2x.to. 116 IN A 87.230.54.12 orion.2x.to. 116 IN A 87.230.100.10 orion.2x.to. 116 IN A 87.230.51.65 I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs, etc.) it balances pretty well. The discrepancies from highest to lowest loaded server hardly ever exceed 15%. However, now I have the problem that I am introducing more servers into the system, and not all of them have the same capacity. I currently only have 1 Gbit/s servers, but I want to work with 100 Mbit/s and also 10 Gbit/s servers. So what I want is to introduce a 10 Gbit/s server with a weight of 100, a 1 Gbit/s server with a weight of 10 and a 100 Mbit/s server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled). But adding a 10 Gbit/s server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers set it to 120 if a lower TTL is specified... so I have heard), I think something like this should occur in an ideal scenario:

first 120 seconds: 50% of requests get server A -> keep it for 240 seconds. 50% of requests get server B -> keep it for 120 seconds

second 120 seconds: 50% of requests still have server A cached -> keep it for another 120 seconds. 25% of requests get server A -> keep it for 240 seconds 25% of requests get server B -> keep it for 120 seconds

third 120 seconds: 25% will get server A (from the 50% of server A that now expired) -> cache 240 sec 25% will get server B (from the 50% of server A that now expired) -> cache 120 sec 25% will have server A cached for another 120 seconds 12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec 12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

fourth 120 seconds: 25% will have server A cached -> cache for another 120 secs 12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs 12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs 12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs 12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs 6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs 6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs 12.5% will have server A cached -> cache another 120 secs

... I think I lost track at this point, but I think you get the idea... As you can see, this gets pretty complicated to predict, and it will for sure not work out like this in practice. But it should definitely have an effect on the distribution! I know that weighted round robin exists and is controlled on the authoritative server: it just cycles through the DNS records when responding and returns records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise.
If it doesn't weight perfectly it's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - and it doesn't require a DNS server that controls this dynamically, which saves resources - which is in my opinion the whole point of DNS load balancing vs hardware load balancers. My question now is... are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records? Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with Fibre Channel etc., but the costs are ridiculous and it also only increases the width of the bottleneck rather than eliminating it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
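For what it's worth, the cache model sketched above is easy to simulate rather than tracking the fractions by hand. The toy sketch below (C#) assumes a population of independent resolvers that re-query the instant their cached answer expires and are handed either record with equal probability; both of those are simplifying assumptions, not real resolver behaviour. Under that model the long-run share of time each server is pointed at comes out roughly proportional to its TTL (about 2:1 for 240 s versus 120 s), so TTL ratios do act as weights, but only to the extent resolvers actually honour them.

    using System;

    class TtlWeightSim
    {
        // Toy model of TTL-based weighting: every resolver independently
        // re-queries the moment its cached record expires and gets server A
        // or server B with equal probability, caching it for that record's
        // TTL. We then measure what share of "resolver time" (a crude
        // stand-in for traffic) each server receives. All constants are
        // illustrative assumptions.
        static void Main()
        {
            const double ttlA = 240, ttlB = 120;   // seconds
            const int resolvers = 10000;
            const double horizon = 100000;         // simulated seconds per resolver

            var rng = new Random(1);
            double timeOnA = 0, total = 0;

            for (int i = 0; i < resolvers; i++)
            {
                double t = 0;
                while (t < horizon)
                {
                    bool gotA = rng.NextDouble() < 0.5;
                    double ttl = gotA ? ttlA : ttlB;
                    double span = Math.Min(ttl, horizon - t);
                    if (gotA) timeOnA += span;
                    total += span;
                    t += ttl;
                }
            }

            Console.WriteLine("share of time pointing at A: {0:P1}", timeOnA / total);
        }
    }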

    Read the article

  • Remote host: can tracert, can telnet, can*not* browse: what gives?

    - by MacThePenguin
    One of my company's customers has made a change to their Internet connection, and now we can't connect to them any more from our LAN. To help me troubleshoot this issue, the network guy on the customer's site has configured their firewall so that an HTTPS connection to their public IP address is open to any IP. I should put https://<customer's IP> in my browser and get a web page. Well, it works from any network I've tried (even from my smartphone), just not from my company's LAN. I thought it might be an issue with our firewall (though I checked its rules and it allows outbound TCP port 443 to anywhere), so I just connected a PC directly to the network connection of our provider, bypassing our firewall completely, and still it didn't work (everything else worked). So I asked our Internet provider's customer service for help, and they asked me to do a tracert to our customer's IP. The tracert is successful, as the final hop shown in the output is the host I want to reach. So they said there's no problem. :( I also tried telnet <customer's IP> 443 and that works as well: I get a blank page with the cursor blinking (I've tried using another random port and that gives me an error message, as it should). Still, from any browser of any PC in my LAN I can't open that URL. I tried checking the network traffic with Wireshark: I see the packets going through and answers coming back, though the packets I see passing are far fewer than when I successfully connect to another HTTPS website. See the attached screenshot: I had to blur the IPs; anyway, the longer string is my PC's local IP address, the shorter one is the customer's public IP. I don't know what else to try. This is the only IP doing this... Any idea what I could try to find a solution to this issue? Thanks, let me know if you need further details. Edit: when I say "it doesn't work" I mean: the page doesn't open, the browser keeps loading for a long time and eventually shows an error saying that the page cannot be opened. I'm not in my office now so I can't paste the exact message, but it's the usual message you get when the browser reaches its timeout. When I say "it works", I mean the browser loads and shows a webpage (it's the logon page for the customer's firewall admin interface: so there's the firewall brand's logo and there are fields to enter a user id and a password). Update 13/09/2012: tried again to connect to the customer's network through our Internet connection without a firewall. This is what I did: Run a Kubuntu 12.04 live distro on a spare laptop; Updated all the packages I could and installed WireShark; Attached it to my LAN and verified that I couldn't open https://<customer's IP>. Verified that the Wireshark trace for this attempt was the same as the one I've already posted; Verified that I could connect to another customer's host using rdesktop (it worked); Tried to rdesktop to <customer's IP>, here's the output: kubuntu@kubuntu:/etc$ rdesktop <customer's IP> Autoselected keyboard map en-us ERROR: recv: Connection reset by peer Disconnected the laptop from the LAN; Disconnected the firewall from the Extranet connection, connected the laptop instead.
Set its network configuration so that I could access the Internet; Verified that I could connect to other websites in http and https and in RDP to other customers' hosts - it all worked as expected; Verified that I could still traceroute to <customer's IP>: I could; Verified that I still couldn't open https://<customer's IP> (same exact result as before); Checked the WireShark trace for this attempt and noticed a different behaviour: I could see packets going out to the customer's IP, but no replies at all; Tried to run rdesktop again, with a slightly different result: kubuntu@kubuntu:/etc/network$ rdesktop <customer's IP> Autoselected keyboard map en-us ERROR: <customer's IP>: unable to connect Finally gave up, put everything back as it was before, turned off the laptop and lost the WireShark traces I had saved. :( I still remember them very well though. :) Can you get anything out of it? Thank you very much. Update 12/09/2012 n.2: I followed the suggestion by MadHatter in the comments. From inside the firewall, this is what I get: user@ubuntu-mantis:~$ openssl s_client -connect <customer's IP>:443 CONNECTED(00000003) If I now type GET / the output pauses for several seconds and then I get: write:errno=104 I'm going to try the same, but bypassing the firewall, as soon as I can. Thanks. Update 12/09/2012 n.3: So, I think ISA Server is altering the results of my tests... I tried installing Wireshark directly on the firewall and monitoring the packets on the Extranet network card. When the destination is the customer's IP, whatever service I try to connect to (HTTPS, RDP or SAProuter), I can only see outbound packets and no response packets whatsoever from their side. It looks like ISA Server is "faking" the remote server's replies, that's why I get a connection using telnet or the openSSL client. This is the wireshark trace from inside our LAN: But this is the trace on the Extranet network card: This makes a bit more sense... I'll send this info to the customer's tech and see if he can make anything out of it. Thanks to all that took the time to read my question and post suggestions. I'll update this post again.

    Read the article

  • Rx IObservable buffering to smooth out bursts of events

    - by Dan
    I have an Observable sequence that produces events in rapid bursts (i.e. five events one right after another, then a long delay, then another quick burst of events, etc.). I want to smooth out these bursts by inserting a short delay between events. Imagine the following diagram as an example: Raw: --oooo--------------ooooo-----oo----------------ooo| Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o| My current approach is to generate a metronome-like timer via Observable.Interval() that signals when it's ok to pull another event from the raw stream. The problem is that I can't figure out how to then combine that timer with my raw unbuffered observable sequence. IObservable.Zip() is close to doing what I want, but it only works so long as the raw stream is producing events faster than the timer. As soon as there is a significant lull in the raw stream, the timer builds up a series of unwanted events that then immediately pair up with the next burst of events from the raw stream. Ideally, I want an IObservable extension method with the following function signature that produces the behavior I've outlined above. Now, come to my rescue, StackOverflow :) public static IObservable<T> Buffered<T>(this IObservable<T> src, TimeSpan minDelay) PS. I'm brand new to Rx, so my apologies if this is a trivially simple question... 1. Simple yet flawed approach Here's my initial naive and simplistic solution that has quite a few problems: public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay) { Queue<T> q = new Queue<T>(); source.Subscribe(x => q.Enqueue(x)); return Observable.Interval(minDelay).Where(_ => q.Count > 0).Select(_ => q.Dequeue()); } The first obvious problem with this is that the IDisposable returned by the inner subscription to the raw source is lost and therefore the subscription can't be terminated. Calling Dispose on the IDisposable returned by this method kills the timer, but not the underlying raw event feed that is now needlessly filling the queue with nobody left to pull events from the queue. The second problem is that there's no way for exceptions or end-of-stream notifications to be propagated through from the raw event stream to the buffered stream - they are simply ignored when subscribing to the raw source. And last but not least, now I've got code that wakes up periodically regardless of whether there is actually any work to do, which I'd prefer to avoid in this wonderful new reactive world. 2. Way overly complex approach To solve the problems encountered in my initial simplistic approach, I wrote a much more complicated function that behaves much like IObservable.Delay() (I used .NET Reflector to read that code and used it as the basis of my function). Unfortunately, a lot of the boilerplate logic such as AnonymousObservable is not publicly accessible outside the System.Reactive code, so I had to copy and paste a lot of code. This solution appears to work, but given its complexity, I'm less confident that it's bug-free. I just can't believe that there isn't a way to accomplish this using some combination of the standard Reactive extensions. I hate feeling like I'm needlessly reinventing the wheel, and the pattern I'm trying to build seems like a fairly standard one.
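For reference, one compact shape that avoids both the ticket build-up of the Zip idea and the hand-rolled queue: turn every source element into a small inner observable that emits the element immediately and only completes after minDelay, then Concat the inner observables so the next one cannot start until the previous delay has elapsed. During a burst the items come out spaced minDelay apart; during a lull nothing accumulates, so the first item after the lull passes straight through. This is a sketch using the standard Select/Delay/StartWith/Concat operators (namespace as in current Rx releases), not tested against any particular Rx version:

    using System;
    using System.Reactive.Linq;

    public static class PacingExtensions
    {
        // Emit items from 'source' no closer together than 'minDelay'.
        // Each element becomes an inner sequence that yields the element at
        // once and completes minDelay later; Concat serialises the inner
        // sequences, so bursts are spread out and errors/completion still
        // flow through. Disposing the subscription disposes the whole chain.
        public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay)
        {
            return source
                .Select(x => Observable.Empty<T>()
                                       .Delay(minDelay)
                                       .StartWith(x))
                .Concat();
        }
    }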

    Read the article

  • How to show form in front in C#

    - by corlettk
    Folks, please does anyone know how to show a Form from an otherwise invisible application, and have it get the focus (i.e. appear on top of other windows)? I'm working in C# .NET 3.5. I suspect I've taken "completely the wrong approach"... I do not Application.Run(new TheForm()); instead I (new TheForm()).ShowDialog()... The Form is basically a modal dialogue, with a few check-boxes, a text-box, and OK and Cancel buttons. The user ticks a checkbox and types in a description (or whatever), then presses OK; the form disappears and the process reads the user input from the Form, Disposes it, and continues processing. This works, except that when the form is shown it doesn't get the focus; instead it appears behind the "host" application, until you click on it in the taskbar (or whatever). This is a most annoying behaviour, which I predict will cause many "support calls", and the existing VB6 version doesn't have this problem, so I'm going backwards in usability... and users won't accept that (and nor should they). So... I'm starting to think I need to rethink the whole shebang... I should show the form up front, as a "normal application", and attach the remainder of the processing to the OK-button-click event. It should work, but that will take time which I don't have (I'm already over time/budget)... so first I really need to try to make the current approach work... even by quick-and-dirty methods. So please, does anyone know how to "force" a .NET 3.5 Form (by fair means or foul) to get the focus? I'm thinking "magic" Windows API calls (I know Twilight Zone: This only appears to be an issue at work, where I'm using Visual Studio 2008 on Windows XP SP3... I've just failed to reproduce the problem with an SSCCE (see below) at home on Visual C# 2008 on Vista Ultimate... This works fine. Huh? WTF? Also, I'd swear that at work yesterday it showed the form when I ran the EXE, but not when I F5'ed (or Ctrl-F5'ed) straight from the IDE (which I just put up with)... At home the form shows fine either way. Totally confusterpating! It may or may not be relevant, but Visual Studio crashed and burned this morning when the project was running in debug mode and I was editing the code "on the fly"... it got stuck in what I presumed was an endless loop of error messages. The error message was something about "can't debug this project because it is not the current project", or something... So I just killed it off with Process Explorer. It started up again fine, and even offered to recover the "lost" file, an offer which I accepted. using System; using System.Windows.Forms; namespace ShowFormOnTop { static class Program { [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); //Application.Run(new Form1()); Form1 frm = new Form1(); frm.ShowDialog(); } } } Background: I'm porting an existing VB6 implementation to .NET... It's a "plugin" for a "client" GIS application called MapInfo. The existing client "worked invisibly" and my instructions are "to keep the new version as close as possible to the old version", which works well enough (after years of patching); it's just written in an unsupported language, so we need to port it. About me: I'm pretty much a noob to C# and .NET generally, though I've got a bottoms wiping certificate; I have been a professional programmer for 10 years, so I sort of "know some stuff". Any insights would be most welcome... and thank you all for taking the time to read this far. Conciseness isn't (apparently) my forte. Cheers. Keith.
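Before rewriting the whole flow, two things may be worth trying, sketched below: give ShowDialog an owner window by wrapping the host application's HWND in a NativeWindow, and explicitly Activate the form once it is shown (TopMost is the blunter fallback). The hostHwnd parameter and the ShowInFront helper are invented for the sketch; obtaining the host window handle from MapInfo is left out. These are standard WinForms/Win32 calls, but Windows' foreground-lock rules can still defeat them, so treat this as something to try rather than a guaranteed fix.

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    static class DialogLauncher
    {
        [DllImport("user32.dll")]
        static extern bool SetForegroundWindow(IntPtr hWnd);

        // Show 'form' as a modal dialog owned by the host application's window
        // (hostHwnd is assumed to be obtained elsewhere), and nudge it to the
        // foreground once it becomes visible.
        public static DialogResult ShowInFront(Form form, IntPtr hostHwnd)
        {
            form.StartPosition = FormStartPosition.CenterScreen;
            form.Shown += (s, e) =>
            {
                form.Activate();                    // ask WinForms to focus the dialog
                SetForegroundWindow(form.Handle);   // belt-and-braces Win32 call
            };

            if (hostHwnd != IntPtr.Zero)
            {
                var owner = new NativeWindow();
                owner.AssignHandle(hostHwnd);
                try { return form.ShowDialog(owner); }
                finally { owner.ReleaseHandle(); }
            }

            form.TopMost = true;                    // last resort: keep it above everything
            return form.ShowDialog();
        }
    }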

    Read the article

  • Help needed on deciphering the g++ vtable dumps

    - by Ganesh Kundapur
    Hi, for the fallow class hierarchy class W { public: virtual void f() { cout << "W::f()" << endl; } virtual void g() { cout << "W::g()" << endl; } }; class AW : public virtual W { public: void g() { cout << "AW::g()" << endl; } }; class BW : public virtual W { public: void f() { cout << "BW::f()" << endl; } }; class CW : public AW, public BW { }; g++ -fdump-class-hierarchy is Vtable for W W::_ZTV1W: 4u entries 0 (int ()(...))0 4 (int ()(...))(& _ZTI1W) 8 W::f 12 W::g Class W size=4 align=4 base size=4 base align=4 W (0xb6e3da50) 0 nearly-empty vptr=((& W::_ZTV1W) + 8u) Vtable for AW AW::_ZTV2AW: 7u entries 0 0u 4 0u 8 0u 12 (int ()(...))0 16 (int ()(...))(& _ZTI2AW) 20 W::f 24 AW::g VTT for AW AW::_ZTT2AW: 2u entries 0 ((& AW::_ZTV2AW) + 20u) 4 ((& AW::_ZTV2AW) + 20u) Class AW size=4 align=4 base size=4 base align=4 AW (0xb6dbf6c0) 0 nearly-empty vptridx=0u vptr=((& AW::_ZTV2AW) + 20u) W (0xb6e3da8c) 0 nearly-empty virtual primary-for AW (0xb6dbf6c0) vptridx=4u vbaseoffset=-0x00000000000000014 Vtable for BW BW::_ZTV2BW: 7u entries 0 0u 4 0u 8 0u 12 (int ()(...))0 16 (int ()(...))(& _ZTI2BW) 20 BW::f 24 W::g VTT for BW BW::_ZTT2BW: 2u entries 0 ((& BW::_ZTV2BW) + 20u) 4 ((& BW::_ZTV2BW) + 20u) Class BW size=4 align=4 base size=4 base align=4 BW (0xb6dbf7c0) 0 nearly-empty vptridx=0u vptr=((& BW::_ZTV2BW) + 20u) W (0xb6e3dac8) 0 nearly-empty virtual primary-for BW (0xb6dbf7c0) vptridx=4u vbaseoffset=-0x00000000000000014 Vtable for CW CW::_ZTV2CW: 14u entries 0 0u 4 0u 8 4u 12 (int ()(...))0 16 (int ()(...))(& _ZTI2CW) 20 BW::_ZTv0_n12_N2BW1fEv 24 AW::g 28 4294967292u 32 4294967292u 36 0u 40 (int ()(...))-0x00000000000000004 44 (int ()(...))(& _ZTI2CW) 48 BW::f 52 0u Construction vtable for AW (0xb6dbf8c0 instance) in CW CW::_ZTC2CW0_2AW: 7u entries 0 0u 4 0u 8 0u 12 (int ()(...))0 16 (int ()(...))(& _ZTI2AW) 20 W::f 24 AW::g Construction vtable for BW (0xb6dbf900 instance) in CW CW::_ZTC2CW4_2BW: 13u entries 0 4294967292u 4 4294967292u 8 0u 12 (int ()(...))0 16 (int ()(...))(& _ZTI2BW) 20 BW::f 24 0u 28 0u 32 4u 36 (int ()(...))4 40 (int ()(...))(& _ZTI2BW) 44 BW::_ZTv0_n12_N2BW1fEv 48 W::g VTT for CW CW::_ZTT2CW: 7u entries 0 ((& CW::_ZTV2CW) + 20u) 4 ((& CW::_ZTC2CW0_2AW) + 20u) 8 ((& CW::_ZTC2CW0_2AW) + 20u) 12 ((& CW::_ZTC2CW4_2BW) + 20u) 16 ((& CW::_ZTC2CW4_2BW) + 44u) 20 ((& CW::_ZTV2CW) + 20u) 24 ((& CW::_ZTV2CW) + 48u) Class CW size=8 align=4 base size=8 base align=4 CW (0xb6bea2d0) 0 vptridx=0u vptr=((& CW::_ZTV2CW) + 20u) AW (0xb6dbf8c0) 0 nearly-empty primary-for CW (0xb6bea2d0) subvttidx=4u W (0xb6e3db04) 0 nearly-empty virtual primary-for AW (0xb6dbf8c0) vptridx=20u vbaseoffset=-0x00000000000000014 BW (0xb6dbf900) 4 nearly-empty lost-primary subvttidx=12u vptridx=24u vptr=((& CW::_ZTV2CW) + 48u) W (0xb6e3db04) alternative-path what are each entries in Vtable for AW AW::_ZTV2AW: 7u entries 0 0u // ? 4 0u // ? 8 0u // ? Vtable for CW CW::_ZTV2CW: 14u entries 0 0u // ? 4 0u // ? 8 4u // ? 12 (int ()(...))0 16 (int ()(...))(& _ZTI2CW) 20 BW::_ZTv0_n12_N2BW1fEv // ? 24 AW::g 28 4294967292u // ? 32 4294967292u // ? 36 0u // ? 40 (int ()(...))-0x00000000000000004 // some delta 44 (int ()(...))(& _ZTI2CW) 48 BW::f 52 0u // ? Thanks, Ganesh

    Read the article

< Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >