Search Results

Search found 1311 results on 53 pages for 'dr gianluigi zane zanettini'.


  • What's up with stat on Mac OS X/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, "Give the mount point of a path", one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS X 10.4. For my system, df and mount give:

      cas cas$ df
      Filesystem              512-blocks      Used     Avail Capacity  Mounted on
      /dev/disk0s3              58342896  49924456   7906440    86%    /
      devfs                          194       194         0   100%    /dev
      fdesc                            2         2         0   100%    /dev
      <volfs>                       1024      1024         0   100%    /.vol
      automount -nsl [166]             0         0         0   100%    /Network
      automount -fstab [170]           0         0         0   100%    /automount/Servers
      automount -static [170]          0         0         0   100%    /automount/static
      /dev/disk2s1             163577856  23225520 140352336    14%    /Volumes/Snapshot
      /dev/disk2s2             409404102   5745938 383187960     1%    /Volumes/Sparse

      cas cas$ mount
      /dev/disk0s3 on / (local, journaled)
      devfs on /dev (local)
      fdesc on /dev (union)
      <volfs> on /.vol
      automount -nsl [166] on /Network (automounted)
      automount -fstab [170] on /automount/Servers (automounted)
      automount -static [170] on /automount/static (automounted)
      /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
      /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

      cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
      / disk0s3r
      /dev ???r
      /dev ???r
      /.vol ???r
      /Network ???r
      /automount/Servers ???r
      /automount/static ???r
      /Volumes/Snapshot disk2s1r
      /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page: "The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with: ... dr Display actual device name." What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript: per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from the CWD...
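
    A pattern worth noticing before blaming stat: every "???" above belongs to a virtual filesystem (devfs, fdesc, volfs, automount) that has no node under /dev to map a device number back to. A quick check, as a sketch (assuming the BSD stat(1) shipped with 10.4, where %N is the file name, %d prints st_dev numerically, and %Sdr is its string form per the man page quoted above):

      for m in / /dev /.vol /Network /Volumes/Snapshot; do
        stat -f "%N: dev=%d, name=%Sdr" "$m"   # numeric st_dev beside its string form
      done
      # Expectation: real disks resolve to a diskNsM name, while the virtual
      # mounts show a device number with no /dev entry to look up, hence "???".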

  • Windows 8: Clean Install Fails BEFORE Country Screen

    - by Mark
    G'day. I've been trying to (clean) install Windows 8 on my PC for over two weeks now, and it's really getting old. You can see here that I've outlined my issue to the Windows Technical Community, which has resulted in... absolutely no help at all. http://answers.microsoft.com/en-us/windows/forum/windows_8-windows_install/windows-8-clean-install-fails-before-country/64229d0a-0220-45a9-bdc6-c41062df8a75 Tl;dr? Yeah, me too. Basically, I've shrunk the HDD, and it's working (I'm writing this on that PC, on XP). I've tried both the x64 and the x86 DVDs, AND another HDD with W8 installed on it from my test machine, and they ALL fail to load. (I get the slanty Windows logo, and after about 10 seconds, BOOM. Sad-face error screen.) I really don't want to have to remove the video card, or the unevenly/not-paired DIMM in the 2nd memory channel, because the case is stupid and in an awkward spot, but I'm running out of ideas! PS. I tried turning on AHCI tonight. All that resulted in was XP wigging out about new drivers & explorer.exe crashing. Fun times! :(

  • Computer locking up, looking for bootable hardware diagnostic tool.

    - by Carl Menke
    Well, today I helped my friend build a computer. All went pretty well until we got to installing Win7: it crashed constantly. I adjusted pretty much every setting in the BIOS and removed as much hardware as possible to try and prevent a crash. No dice. So far I've tried running an Ubuntu live CD without the hard drive installed. Nope, crashed on boot. Then I tried Microsoft's RAM utility disk, and it eventually locked up on that too (the RAM passed, though). So it seems to me like it's either the CPU (AMD Phenom II X3) or the motherboard that could be bad, but I don't know how to test them individually for problems. I thought it could be an overheating issue, but the BIOS reports that the CPU temp is fine, idling around 34C. Any advice, or a diagnostic disk that could help me out? TL;DR: Computer locks up frequently during use (cannot even boot/install an operating system), memory is fine, probably CPU or mobo, BIOS says CPU temps are fine. What should I try?

  • Static noise in headphones

    - by John Murdoch
    I have an Asus P6T-based system. I was using the on-board sound (plugging Logitech X-230 2.1 analog speakers into the green "front speakers" 3.5mm analog output, then plugging my headphones into those). I was quite happy with the sound quality (I didn't hear any static noise if the volume was turned down to my normal listening level). Then about a week ago I started getting terrible static noise from the left channel, and no normal audio on that channel. The right channel had more static noise than usual, but did carry a bit of sound. I tried using the AC'97 jacks on the front of my case, but those seemed to have no signal. I decided my on-board sound card had gone bad and bought an internal sound card to replace it (StarTech 7.1-channel PCI). This fixed the no-sound-from-left-channel problem, but the static noise was much more audible. I decided the card was low quality and/or picking up interference from everything else happening inside the computer case, and bought a Sweex SC016 external USB sound card. But even with that I have static noise in the headphones. Positioning the USB sound card differently doesn't seem to help. Trying the other analog outputs (e.g., surround) doesn't help. The static noise in all cases is proportional to the volume. I have tried different headphones, but the situation is the same, though perhaps the flavour of the static changes slightly. So what are my options? a) Get another, more expensive external USB sound card, hoping the quality will improve? b) Get another, more expensive internal sound card (PCIe x1, perhaps), hoping the quality will improve? c) Get a dedicated DAC box? d) Get some hi-fi earphones? Suggestions? tl;dr: Two different sound cards both still produce static noise in headphones.

  • IIS 6 + ASP.NET web service - DW20 and stackoverflow exception

    - by pcampbell
    Consider an ASP.NET SOAP web service that starts up fine but craters hard when receiving its first hit. Please note that this deployment works in the Test environment, but not in the PreProd environment. Both are Windows 2003 SP3 + IIS 6 + ASP.NET 3.5, all up to date. The behaviour we're seeing is:
    - restart the site & app pool (the app pool is configured to run under Network Service)
    - browsing to the .asmx and .wsdl responds normally, as expected
    - send a normal, well-formed SOAP request / normal payload to the web service
    - 100% CPU usage; after 5 seconds, the page request / site returns "Service Unavailable"
    - no entry is created in the IIS log file (i.e. c:\windows\system32\logfiles\W3C-foo)
    - the app pool ends up being stopped
    The processes that hit the CPU hard are dw20.exe. I am unsure why Dr. Watson is involved here. The Event Log shows an ASP.NET Runtime error: EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.3959, P3 45d6968e, P4 errormanagement, P5 1.0.0.0, P6 4b86a13f, P7 24, P8 0, P9 system.stackoverflowexception, P10 NIL. Questions: Any thoughts on what this System.StackOverflowException might be? Given that the code is the same between environments, might it be a payload problem? Could it be a configuration issue? You can see the name of my .NET assembly there in the exception message: "ErrorManagement".

  • How do I set up a fully featured small business network?

    - by JoshReedSchramm
    This has the possibility to be a very large question, but I recently acquired a few rack-mount servers and the hardware necessary to run them. Unfortunately, I'm a programmer with very little understanding of how to set up a good working network, so I'm hoping someone on here might be able to help. What I want to do is run a domain with a series of subdomains, all externally accessible. The setup would live inside my home, and my internet connection is your run-of-the-mill cable modem (which means a dynamic IP). I want to set up a couple of sites, specifically: www.mycompany.com (mycompany.com with no subdomain would redirect to this), build.mycompany.com (for my continuous integration server), ruby.mycompany.com (for Ruby projects), win.mycompany.com (for Windows projects), etc. Additionally, this is still my home network, so our personal machines need to be able to get on via wifi with at least the same security we have now through an out-of-the-box router from Best Buy. I'm thinking I need a DNS server and a DHCP server, and one of those would run either No-IP or DynDNS to accommodate the dynamic IP. I don't necessarily need mail, but it might be helpful to have some sort of mail server I could use for testing; it doesn't need to get out to the greater internet, though. So how do I set up this kind of network? tl;dr: Need to know how to set up your standard office-style network in my home off my normal consumer-level cable modem connection.
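
    For the internal half of that list, one small box can cover both DNS and DHCP. A minimal sketch, assuming dnsmasq (the hostnames and addresses below are made up for illustration; the public side would still need No-IP/DynDNS plus matching records at the registrar):

      # /etc/dnsmasq.conf
      domain=mycompany.lan                         # internal-only suffix
      dhcp-range=192.168.1.100,192.168.1.200,12h   # DHCP pool for wifi/personal machines
      address=/build.mycompany.lan/192.168.1.10    # CI server
      address=/ruby.mycompany.lan/192.168.1.11     # Ruby projects box
      address=/win.mycompany.lan/192.168.1.12      # Windows projects box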

  • VPN issue: SSTP Service service started and then stopped

    - by Ampersand
    When I was trying to set up a VPN connection on my laptop running Windows 7 Ultimate, I got this error: "Network Connections: Cannot load the Remote Access Connection Manager service. Error 711: The operation could not finish because it could not start the Remote Access Connection Manager service in time. Please try the operation again." I traced through some service dependencies and discovered that the Secure Socket Tunneling Protocol Service was set to Manual. However, when I try to manually start the service, I get: "Services: The Secure Socket Tunneling Protocol Service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs." Setting all the services involved to Automatic did not help; SSTP just showed Automatic and Stopped in the Services panel. I found a solution that involved booting into Safe Mode and deleting the contents of C:\Windows\System32\LogFiles\WMI\RtBackup. This solution worked, and I could set up a VPN connection, but only until I rebooted again. TL;DR: I'm looking for a way to permanently enable the Secure Socket Tunneling Protocol Service and other VPN-related services so I don't have to reboot into Safe Mode and delete files every time I need to connect to a VPN.
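
    For reference while experimenting, the services behind those display names can be reconfigured from an elevated prompt. A sketch, assuming the usual Windows 7 service names (SstpSvc for the Secure Socket Tunneling Protocol Service, RasMan for Remote Access Connection Manager); note this only addresses the symptom, not whatever keeps corrupting the RtBackup trace files:

      sc config SstpSvc start= auto
      sc config RasMan start= auto
      net start SstpSvc
      net start RasMan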

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.CommandLine.exe restore etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of:

      Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file

    Any advice here, or an idea of where to look for a better solution? FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box gets added too. The solution as-is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, as I have tried several options to get it to restore and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file explaining these errors, and even the "--debug-output=true" flag does nothing; every restore fails with the same "partial file record" errors quoted above. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that offers the same benefits as Duplicati/Duplicity but actually works across platforms?

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source-version-control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr>

    <introduction> I am programming something I have never done before, and there might already be tools out there that do the exact thing I am programming, but at this point I am having too much fun to care, so I am still going to do it from scratch, even if there is a tool for this. Anyways, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, let's say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and also to keep track of different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of different versions of the text body as objects in Ruby, and then serialize it, preferably in human-readable form (so I'll most likely use YAML), for editing if needed due to corruption by a software bug or a mistake made by an admin doing some version editing. So at first I just tried to dive head first into this feature, only to find that generating a diff patch efficiently is more difficult than I thought. So I did some research and came across some ideas, some of which I have implemented already and some I have not. However, it all pretty much revolves around the longest common subsequence problem, as you would already know if you have done anything with diff or diff-like features, and optimizing the function that solves it. Currently I have it truncate the compared versions of the text body from the beginning and end until non-matching lines are found. Then it solves the problem using a comparison matrix, but instead of incrementing the value stored in a cell when it finds a matching line, like in most longest-common-subsequence algorithms I have seen examples of, I increment when I have a non-matching line, so as to calculate edit distance instead of longest common subsequence. (As far as I can tell, the two approaches are essentially two sides of the same coin, so either could be used to derive an answer; a sketch of this matrix approach follows after the optimizations list.) It then back-traces through the comparison matrix and notes when there was an incrementation and in which adjacent cell (West, Northwest, or North) to determine that line's diff entry, and assumes all other lines to be unchanged. Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started worrying about optimizing at least enough that a spammer who somehow knew how I implemented the version control system, and knew my worst-case-scenario entry, still wouldn't be able to hit the server that badly. After searching and reading research papers and articles on the internet, I've come across several optimizations that seem decent, but all have pros and cons, and I am having a hard time deciding how well the pros and cons balance out in this situation. So are the ones listed here worth it? I have listed them with known pros and cons. </introduction>

    <optimizations>
    1. Chop the compared sequences into multiple chunks of subsequences by splitting where lines are unchanged, then truncate the unchanged lines at the beginning and end of each section, and solve the edit distance of each subsequence. Pro: changes the time growth in the changed area from quadratic to something closer to linear. Con: figuring out where to split already seems to require solving edit distance, except now you don't care how it changed. It would be fine if this were solvable by something closer to Hamming distance, but a single insertion throws that off.
    2. Use a cryptographic hash function to convert all sequence elements into integers while ensuring uniqueness, then solve the edit distance comparing the hash integers instead of the sequence elements themselves. Pro: comparing two integers is faster than comparing two strings, so a slight performance gain is received after every comparison, which can add up. Con: hashing all the sequence elements takes time, and may cost more than you gain back from the integer comparisons. You could use the built-in hash function for a string, but that will not guarantee uniqueness.
    3. Use lazy evaluation to calculate only the three center-most diagonals of the comparison matrix, and calculate additional diagonals only as needed; also use this approach to possibly remove the need, on some comparisons, to compare all three adjacent cells, as described at the site linked to above. Pro: can turn an algorithm that always takes O(n * m) time into one where that is only the worst case; the best case becomes practically linear and the average case is somewhere between the two. Con: it is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time working out how to convert it into Ruby from that description.
    4. Make a C module that does the hard work at the native level, with a Ruby wrapper so Ruby can make all the calls to it that it needs. Pro: I have to imagine that evaluating something like this in C could be a LOT faster. Con: I have no idea how Rails handles apps with Ruby code that has C extensions, and it hurts the portability of the app.
    5. (An optimization for after the solving of edit distance.) Store additional combined diffs alongside the ones produced by each version, forming a delta-tree data structure with the most recently made diff as the root node, so getting to any version takes worst-case O(log n) time instead of O(n). Pro: would make going back to an old version a lot faster. Con: on every new commit, the delta-tree gets a new root node, which costs time reorganizing the tree for an operation that is carried out far more often than going back a version, not to mention the unlikelihood that it will be an old version.
    </optimizations>

    So are these things worth the effort?
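
    For concreteness, here is a minimal sketch of the edit-distance matrix described in the introduction (standard Wagner-Fischer over lines; this is an illustration, not the poster's actual code, and it omits the truncation and back-trace steps):

      def edit_distance(a, b)
        # m[i][j] = edit distance between the first i lines of a and first j lines of b
        m = Array.new(a.size + 1) { Array.new(b.size + 1, 0) }
        (0..a.size).each { |i| m[i][0] = i }
        (0..b.size).each { |j| m[0][j] = j }
        (1..a.size).each do |i|
          (1..b.size).each do |j|
            cost = a[i - 1] == b[j - 1] ? 0 : 1   # increment only on a non-matching line
            m[i][j] = [m[i - 1][j] + 1,           # North: deletion
                       m[i][j - 1] + 1,           # West: insertion
                       m[i - 1][j - 1] + cost     # Northwest: match/substitution
                      ].min
          end
        end
        m[a.size][b.size]
      end

      edit_distance(%w[a b c], %w[a x c])  # => 1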

  • Xen PV packet loss

    - by Delphinator
    I'm having some serious issues with packet loss on one of my servers. This server is a somewhat old (P4-era) machine running Debian Squeeze and Xen 4.0. There are two domUs running on it (both also Debian Squeeze): one gateway and one fileserver. Unfortunately, the processor has no virtualization extensions, therefore only PV can be used. While investigating why our network seems slower than it should be, I found some pretty bad packet loss (~25%). After further investigation and several experiments, I did a measurement between the dom0 and one of the domUs:

      Server listening on UDP port 5001
      Receiving 1470 byte datagrams
      UDP buffer size: 110 KByte (default)
      ------------------------------------------------------------
      Client connecting to dom0, UDP port 5001
      Sending 1470 byte datagrams
      UDP buffer size: 110 KByte (default)
      ------------------------------------------------------------
      [  3] local 192.168.1.2 (domU) port 33817 connected with 192.168.1.100 (dom0) port 5001
      [  4] local 192.168.1.2 (domU) port 5001 connected with 192.168.1.100 (dom0) port 48606
      [ ID] Interval       Transfer     Bandwidth
      [  3]  0.0-10.0 sec  46.3 MBytes  38.7 Mbits/sec
      [  3]  Sent 33020 datagrams
      [  3]  Server Report:
      [  3]  0.0-10.0 sec  46.2 MBytes  38.6 Mbits/sec  0.030 ms    89/33019 (0.27%)
      [  3]  0.0-10.0 sec  1 datagrams received out-of-order
      [  4]  0.0-10.2 sec  43.0 MBytes  35.3 Mbits/sec  13.074 ms  11575/42256 (27%)

    tl;dr: 27% packet loss from dom0 to domU with 50Mbit UDP packets. The same thing happens from anywhere in the network. The problem gets better at lower bandwidths (0.047% at 5Mbit) and worse at higher ones (59% at 200Mbit). I did increase the CPU weight of the dom0, there is no swapping going on, and no physical networking hardware is involved. I never expected Xen (or anything related) to drop packets, and I'm completely clueless what to try next.
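
    For anyone reproducing this, the report above matches a bidirectional iperf UDP test. A sketch of the invocation, assuming iperf 2.x on both ends and the addresses shown in the report:

      # on dom0 (192.168.1.100):
      iperf -s -u -p 5001
      # on the domU (192.168.1.2): 50 Mbit/s of UDP, plus the reverse direction
      iperf -c 192.168.1.100 -u -b 50M -p 5001 -d   # -d runs the dom0->domU test too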

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating an older version), and it always returns:

      Reading ./localconfig...
      Checking for DBD-mysql (v4.00) ok: found v4.005
      Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
      Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
      Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
      Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
      There was an error connecting to MySQL: Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225.

    MySQL version:

      [root@bugzilla-core TMP]# mysql --version
      mysql Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1

    And mysql_config:

      [root@bugzilla-core TMP]# mysql_config
      Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS]
      Options:
        --cflags         [-I/data01/mysql-5.0.60/include -g]
        --include        [-I/data01/mysql-5.0.60/include]
        --libs           [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc]
        --libs_r         [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc]
        --socket         [/tmp/mysql.sock]
        --port           [0]
        --version        [5.0.60sp1]
        --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc]

    Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped, and I'm not sure where to go from here. Scouring the webs hasn't returned anything fruitful. Any ideas?
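
    For what it's worth, "Undefined subroutine &DBD::mysql::db::_login" is the classic signature of a DBD::mysql driver compiled against one libmysqlclient and loaded against another. A hedged sketch of a rebuild pinned to the mysql_config shown above (the path comes from that output; the fix itself is an assumption):

      export PATH=/data01/mysql-5.0.60/bin:$PATH   # make this mysql_config the one found first
      perl -MCPAN -e 'install "DBD::mysql"'        # rebuild the driver against it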

  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way. Security is a little tricky: the staff and patron networks need to be separated. Even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs so they didn't share any servers, they would need to use the same two printers at the very least. Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP, and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network, and has the same Windows server connected on a different port, providing DNS and DHCP, and another, faster DSL modem (separate connections are very useful, especially as we depend heavily on some cloud-hosted software). tl;dr: We have a public network, and a NATed staff network within it. My question is: is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget. (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)

  • Cannot find wireless driver for HP laptop

    - by rodey
    I have an HP laptop (model: dv7-1267cl, Windows 7 64-bit Home Premium) and I cannot find the wireless driver for the laptop. I am in need of a different version because my wireless connection under Windows is flaky and unreliable: I have problems printing to my wireless printer and logging in to and using several sites like reddit.com, phoenix.edu and facebook.com, and I get several "page cannot be displayed" messages while using my wireless connection. If I disable my wireless adapter and use an ethernet cable, it all works fine. I also used an Ubuntu Live CD to confirm that there is not a problem with the hardware; this is a software/driver issue. The drivers were auto-installed by the OS. The Device Manager shows the wireless adapter as Atheros AR5009 802.11 a/g/n WiFi Adapter. I have checked the HP website for my laptop and they do not have wireless drivers listed for that model wireless adapter. I have also checked atheros.com and I do not see my model adapter on their list of available hardware. Device Manager lists the Hardware IDs for my adapter as:

      PCI\VEN_168C&DEV_002A&SUBSYS_1381103C&REV_01
      PCI\VEN_168C&DEV_002A&SUBSYS_1381103C
      PCI\VEN_168C&DEV_002A&CC_028000
      PCI\VEN_168C&DEV_002A&CC_0280

    A search for the first Hardware ID turned up this question from experts-exchange.com. tl;dr: A driver does not exist for that model adapter.

  • IIS serving pages extremely slowly

    - by mos
    TL;DR: IIS 7 on WS2008R2 serves pages really slowly; everyone assumes it's because it's IIS and that we should have gone with an Apache solution on Linux. I have no idea where to start debugging the problem. I work in a nearly all-MS shop with a bunch of fellow programmers who think Linux is the One True Way. Management recently added a Windows machine with IIS to serve Target Process (a third-party agile system), but the site runs extremely slowly. Everyone, to a man, assumes it's because it's on IIS, and if only management would grow a brain and get some Linux servers in here, we could really start cleaning things up! ...Right. Everyone "knows" IIS isn't fit to serve .txt files. ...Well, as the only non-Microsoft-hater in the bunch, I am apparently the only one who thinks maybe the Linux guy who hated being told to set up the IIS server may have screwed things up. I'd like to go fix it, but I don't have any clue as to where to start, as I am not a sysadmin. Help?

  • Anyone know a good mind mapper that works with a scheduler?

    - by GLycan
    TL;DR: Mind-mapping tasks to be processed into a schedule based on task metadata. I have all sorts of ideas about what to invest resources (mainly time) in, but when I actually have time to do something I usually end up browsing reddit, not knowing what to do, and the frequency with which I forget deadlines scares me. I'd love to bring order and structure into my mind, and always know what to do next. So, I want a mind-mapping app where I'd give each branch (types and subtypes of things I want to do) an importance score (if there were two branches, and one had 60 while the other had 40, they would respectively get 60% and 40% of the parent's importance, with the root being 100) and an interval for how often that branch should be revised/updated (a hobby I want to try out might be checked, say, once a week, while a school subject should be checked once a day), and give each leaf (something I want/need to do) how much time it takes, a deadline (if any), and optionally an absolute importance, a recurrence (guitar practice might repeat once a week), and prerequisites (reading something requires that book (although that could be brought somewhere), coding requires a box, jogging requires being outside), and maybe some other flags, like whether it's enjoyable or not. It should either be packaged with or work with a scheduler app, to which I'd say: look, my day works this way (completely busy from 8 to 9:15, then 15 minutes of being inside with nothing, ..., two hours with box and the possibility to go outside, etc.), and that such-and-such pattern is school and happens every weekday except such-and-such days. The output should be a schedule, fit for printing or, when I finally get an Android, mobile viewing, that schedules tasks with regard to availability of resources and importance (importance being derived from the leaf-task's parent branches) and the set of flags (all work and no play makes me a dull boy). One of these tasks should be reviewing anything that should be updated on that day, including future day layouts (e.g., if the time slots of future days have changed; this should be done every day). Does anyone know some collection of preferably open-source (or free, or pirateable) tools, or better yet a single one, that accomplishes this task? I know Python pretty well, and should be able to write any necessary glue.

  • Tomcat Solr times out

    - by user568458
    (Plesk 10.4, CentOS 5.8 Linux, Apache 2 server, with Tomcat 5 on port 8080 and Apache Solr.) I get "The connection has timed out" when requesting domain.com:8080 or www.domain.com:8080 or ip.ad.dr.ess:8080. Every reason I can find why this might be seems not to be the case: Plesk thinks Tomcat is running fine and lists it as an active service. The firewall currently has an accept-all rule on port 8080. There's nothing relevant in the Catalina Tomcat logs (/var/log/tomcat5), just some stuff from the last time Tomcat was started; there's no record at all of the requests that fail. netstat -lnp | grep 8080 gives the following, which I believe means Tomcat is listening for requests to port 8080 on all IP addresses, from any IP and any port (please correct me if I'm wrong):

      tcp    0    0 0.0.0.0:8080    0.0.0.0:*    LISTEN    4018/java

    This covers every cause of this timeout that I can find, so I must be missing something fundamental. It seems Tomcat is running, listening on the right port, getting an appropriate IP address, not obstructed by a firewall, and not failing after receiving a request in a way that would be recorded in the logs (so I believe it can't be out of memory, or anything like that). I'm all out of ideas on how to continue debugging this. I must have overlooked something obvious. Can anyone help?
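
    Two checks that usually split this problem in half, as a sketch (assumption: curl and tcpdump are installed; port 8080 as above). If the local request works but remote packets never show up, the block is upstream of Tomcat (Plesk's own firewall rules, the provider, or an intermediate device) rather than Tomcat itself:

      curl -v http://127.0.0.1:8080/   # from the server: does Tomcat answer locally?
      tcpdump -i any -n port 8080      # then retry from outside: do requests ever arrive?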

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    Ok, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it was an errant redirect rule, or some other sort of server-side issue. We also thought it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sort of rules that out). We're really not sure what is causing all of these errors, or even whether they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:
    - Site resolving in browser, both IP and domain name. No problems here.
    - Site not being crawled by Google (gets a 302 it doesn't seem to follow); it is not due to robots.txt or meta tags.
    - Ping is not working for the IP address. This is very odd, because again, the IP address seems to work fine in the browser.
    - Our thoughts: a redirect rule issue, a domain pointing issue, or possibly some errant code, or some combination of the three.
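
    A hedged first step for the 302: fetch the page the way a crawler does and compare the response headers (assumption: curl is available; the URL below is a placeholder). If the 302 appears with the crawler user agent but not with a browser one, look for UA-dependent redirect rules in the ASP.NET site or IIS configuration:

      curl -sIL -A "Googlebot" http://www.example-client-site.com/
      # -I: headers only; -L: follow redirects; -A: spoof the user agent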

  • What's the easiest route to trying out Mono 2.6?

    - by E J
    We have several web applications built on Microsoft technologies (ASP.NET + MVC framework, built using VS2008, MS SQL Server). I have recently been playing with Ubuntu (9.10), installed using Wubi, and wanted to see if I can get our apps running on a FOSS software stack. I have got the hang of the very basics of PostgreSQL, and I have read that there is some support for LINQ to SQL in Mono (as of 2.6), as well as ASP.NET/MVC. However, I am unsure how to go about getting Mono 2.6 up and running. Here is what I have discovered so far: Ubuntu is not meant for the 'cutting edge'; it is designed to be stable, hence it sometimes takes a release cycle or two for new software to make it to the repositories. Mono is already installed by default, but it is likely to stay at version 2.4 for at least the 10.04 release. You can install parallel environments of Mono, if you know what you're doing. I have had a go at setting up parallel environments, but haven't had any luck yet. (And TBH I am not certain that that will do what I think it's gonna do.) (tl;dr starts here) Is there a distribution of Linux similar enough to Ubuntu that I wouldn't have to start the learning curve all over again, but that will let me install Mono 2.6, PostgreSQL (and possibly MonoDevelop 2.4)? Or should I persist with Ubuntu?

  • Daisy-chainable DisplayPort Monitors

    - by thepurplepixel
    I am the proud new owner of a MacBook Pro with a mini-DisplayPort. My desk setup used to allow me to position the screen of my old MacBook beside an external monitor, essentially allowing dual-head; it was also handy that my old MBP and my external monitor had the same resolution, 1440x900. Now I'm searching for a pair of monitors that I can use with a HengeDock. Unfortunately, the MacBook Pro suffers from having only one mini-DisplayPort. Looking at the spec for DisplayPort 1.2 (which the MBP supports), I see that daisy chaining is supported. So what I'm looking for is a monitor with two DisplayPorts, letting me daisy-chain two monitors off the single mini-DisplayPort. What I don't want is a USB-based video solution: I need full acceleration on both monitors, and an external video card won't cut it. I hope I don't have to wait a few years for these monitors to come out. TL;DR: I need two monitors with two DisplayPorts each that I can daisy-chain. Thanks!

  • VMware ESXi 5: can't create snapshots and consolidate fails; how to delete old or consolidate redo logs?

    - by Scott Szretter
    I have a VM that seems to be working OK, but when VMware DR (or I) tries to create a snapshot, it fails, and the summary page of the VM shows a warning at the top that the disks need to be consolidated. So I go to Snapshot Manager for the VM and choose Consolidate (in Snapshot Manager, there are no snapshots actually listed, by the way). It fails with this error: "This virtual machine has 255 or more redo logs in a single branch of its snapshot tree. The maximum supported limit has been reached, creating new snapshots will not be allowed. To create new snapshots, please delete old snapshots or consolidate the redo logs." If I browse the datastore (which has plenty of free space, 2 TB, and this VM is under 40 GB), in the VM's folder I do in fact see a bunch of files, numbered all the way to 000255:

      myvm-000255-ctk.vmdk
      myvm-000255-delta.vmdk
      myvm-000255.vmdk

    How can I clean all this up? Is there an SSH command-line command, or can I delete some of the files safely? Thanks!
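
    A sketch of the command-line route, assuming ESXi 5.x with SSH enabled (the VM ID below is a placeholder that the first command supplies); do not delete the -delta/-ctk files by hand, as they are live redo logs:

      vim-cmd vmsvc/getallvms                # find the VM's numeric ID
      vim-cmd vmsvc/snapshot.removeall 42    # delete/consolidate all snapshots for VM 42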

  • How should I configure nginx caching headers for a "baked" static file blog? (Octopress)

    - by Doug Stephen
    I recently deployed an Octopress blog (a blogging platform built around Jekyll). It's a static-site blog generator, with no dynamic content or databases to muck about with, and it's being served up by nginx. My question is: what is the appropriate expires directive or Cache-Control header that I should set to make sure that visitors get the most up-to-date version of the site when they visit, without having to manually refresh? Since the site is just .html files, it seems to get cached pretty aggressively. I've tried a million different combinations of expires modified +xxxx and even straight-up expires off, but I can't seem to wrap my head around it. I'm very new to dealing with caching like this, specifically on static files that change frequently, and obviously if the site hasn't been changed then I'd like for it to be served up out of the cache. Update (still not solved, though): I found open_file_cache and tweaked that. Still no dice. It seems like what I might want to do is use nginx as a proxy cache and use Apache with ETags? Is there really no convenient way to make nginx play nicer with conditional requests from the client? TL;DR: I'm running a static-file blog and I'd like to set up nginx to only serve from the cache if the blog hasn't been updated recently, but I'm too stupid to figure it out myself because I'm relatively new to web servers.
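
    On the conditional-request point: nginx does revalidate static files (it sends Last-Modified always, and ETag since 1.3.3), so one hedged approach is to tell clients to cache but always revalidate; unchanged pages then come back as cheap 304s. A minimal sketch, assuming a stock static location block:

      location / {
          # "no-cache" means "revalidate before reuse", not "never store"
          add_header Cache-Control "no-cache";
      }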

  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC, on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html I have a simple problem with it: every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and add it again, filter by filter. (I don't actually do it by hand, I have a nice program that does it for me... but still...) There is a problem: I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily manage to delete only the rules that need to be deleted (thus reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes only one hashing rule. My tc filter show:

      filter parent 1: protocol ip pref 1 u32
      filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
      filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101
        match 0a0a0a0a/ffffffff at 16
      filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102
        match 0a0a0a0c/ffffffff at 16
      filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
      filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
        match 00000000/00000000 at 16
        hash mask 000000ff at 16

    The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a IP match (IP address 10.10.10.10)). Removal of some small subgroup would also be good; for example, I could still recreate a bucket (bkt a) pretty fast. My attempts: I tried to number all the filters using prio, but with no help; they just create something unusable (but deletable) below, and the bucketed filters remain there after that gets deleted. Any ideas? Edit: I'm adding a simplified tl;dr description of the problem: I created a hash filter on some interface just like in http://lartc.org/howto/lartc.adv-filter.hashing.html and I want to find a command that deletes one rule (e.g. 1.2.1.123) from the table, leaving the rest untouched and working.
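
    A hedged sketch of the single-rule delete: iproute2's u32 filters can usually be addressed by the handle that tc filter show prints after "fh" (in ht:bucket:order form). Assuming the device is eth0 (the interface is not named in the question), the 10.10.10.10 match above would then be:

      tc filter del dev eth0 parent 1: protocol ip prio 1 handle 2:a:800 u32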

  • Spreadsheet application that can handle big data on OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger of late, however. The layout of the databases in question is quite simple: usually just a few columns, including a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average of the rows whose timestamps are within +/-1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files of 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: A simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
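
    The AVERAGEIFS pass is the quadratic part: each of the 500,000 rows re-scans its neighbours. If the file is sorted by timestamp, a sliding window computes all the averages in one linear pass. A hedged command-line sketch (assuming whitespace-separated input, the timestamp in column 1, and the value to average in column 3; column positions are an assumption):

      rows = File.readlines(ARGV[0]).map(&:split)
      ts  = rows.map { |r| r[0].to_i }
      val = rows.map { |r| r[2].to_f }

      lo = hi = 0
      sum = 0.0
      ts.each do |t|
        while hi < ts.size && ts[hi] <= t + 1000
          sum += val[hi]; hi += 1          # grow the window on the right
        end
        while ts[lo] < t - 1000
          sum -= val[lo]; lo += 1          # shrink it on the left
        end
        puts "#{t} #{sum / (hi - lo)}"     # average of values within +/-1000 of t
      end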
