Search Results

Search found 14764 results on 591 pages for 'interview questions'.

Page 444/591 | < Previous Page | 440 441 442 443 444 445 446 447 448 449 450 451  | Next Page >

  • wireless router - configuring for low-latency, high traffic environment

    - by Mark C
    Hey all, I have a few questions about configuring a router to achieve low latency and high throughput on a local area network that is not connected to the internet. I've read up on some things, but thought I would solicit some opinions here on what I've found and what I want to know:
    - Turn off SSID broadcast: it produces extraneous packets that all clients receive and (I assume) reply to. Not a huge deal, but it may help a bit.
    - Mixed mode off: I should attempt to have all devices using the same standard (e.g. 802.11n) and turn mixed mode off.
    - Any thoughts on security? Does WEP or any of the WPA variants actually increase latency? Nothing super secure is going over this LAN, so if turning security off made things better, that'd be cool.
    Any other thoughts or things to focus on to create the low-latency environment I'm going for would be great. Links to web pages and papers are also cool; I'm open to going through a bunch of material. Thanks in advance!

    Read the article

  • How to setup firewall to allow internet connection sharing via Wifi USB stick?

    - by hannanaha
    I have a Windows 8 computer linked to the internet via an Ethernet cable (the "Ethernet" network connection). I have attached a D-Link Wi-Fi USB stick to it, and I'm trying to share the main PC's internet connection with my Android phone via a local wifi network. I am using the following batch file to set up this network:

        netsh wlan set hostednetwork mode=allow ssid=MyWifiName key=password keyUsage=persistent
        netsh wlan start hostednetwork

    After I run this script, I can see a new network connection appear in "Control Panel\Network and Internet\Network Connections" named "Local Area Connection *12", and I can see "MyWifiName" on the Android phone. The device name for this connection on the PC is "Microsoft Hosted Network Virtual Adapter". I also set up the "Ethernet" connection to share internet with "Local Area Connection *12". However, the Android phone usually doesn't manage to obtain an IP from the wireless network, and when it does, there still seems to be no connectivity to the internet. When I turn off the Windows Firewall completely, or even just for "Local Area Connection *12", the Android connection is perfect. My questions are:
    1. How should I set up the Windows Firewall to allow the phone to connect properly? Is there a specific rule I need to add in the Windows Firewall advanced settings? (Note: the above method worked great in Windows 7, without any specific tinkering with the firewall.)
    2. Is it safe to turn off the firewall specifically for "Local Area Connection *12" (the wifi connection) if the main Ethernet connection is still protected by the firewall?
    Thanks in advance.
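
    If the goal is to keep the firewall on, the rules the hosted network typically needs are the ones Internet Connection Sharing uses to hand out addresses and resolve names for clients; a sketch of what that might look like (the rule names are made up, run from an elevated prompt):

        netsh advfirewall firewall add rule name="ICS DHCP" dir=in action=allow protocol=UDP localport=67
        netsh advfirewall firewall add rule name="ICS DNS (UDP)" dir=in action=allow protocol=UDP localport=53
        netsh advfirewall firewall add rule name="ICS DNS (TCP)" dir=in action=allow protocol=TCP localport=53

    The symptom (no IP address unless the firewall is off) points at the DHCP traffic on UDP 67/68 being dropped, so that is the first rule worth testing; scoping the rules to the hosted network's subnet would keep the Ethernet side as locked down as before.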

    Read the article

  • Adjust output Brightness/Gamma/Colors in Gnome

    - by Mikee
    We have a desktop system running Ubuntu 8.04.4, connected to a standard desktop LCD monitor. Unfortunately, in 8.04.4 the brightness of the image is cranked way up. It appears to be a graphics driver issue, and installing a newer driver for this Intel GPU is very difficult to do. So I am looking for a software (or config file?) solution to fix this. Note: Ubuntu 9.10 and higher do not exhibit this issue, so it is not a hardware problem; VNC-ing to this machine from another does not exhibit the issue either. Also, I installed "DisplayCalibrator.app", and it does not work very well (the app comes up, but the contents of the window are blank). Is there anything I can add to the xorg.conf file to correct this issue? This solution: http://superuser.com/questions/96539/adjust-contrast-and-brightness-in-ubuntu did not resolve my issue. Thank you all for the help!
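
    Since the driver can't easily be upgraded, one software-side knob that does exist on 8.04 is the X server's gamma ramp; a minimal sketch using xgamma (shipped with the stock X utilities; the value is just a starting point to experiment with):

        xgamma -gamma 0.75    # values below 1.0 darken the whole display

    This only adjusts the lookup table the X server applies to its local output, which would also explain why a VNC session doesn't show the problem. The same correction can be made persistent with a Gamma entry in the Monitor section of xorg.conf.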

    Read the article

  • Trouble running multiple domains on Tomcat behind Apache via mod_jk

    - by mkoryak
    I am having trouble setting up Tomcat 6 with two virtual hosts, behind Apache 2. If I have just one host defined in Tomcat, and one JK worker, everything works fine. As soon as I define another JK worker and a corresponding Tomcat host, I get this error in jk.log:

        9:3075328656] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (69.164.218.75:8009) (errno=111)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_send_request::jk_ajp_common.c (1507): (dogself) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] ajp_service::jk_ajp_common.c (2447): (dogself) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_service::jk_ajp_common.c (2466): (dogself) connecting to tomcat failed.
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=dogself

    My Tomcat server.xml looks like this:

        <Service name="Catalina">
          <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
          <Engine name="Catalina" defaultHost="dogself.com">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="dogself.com" appBase="webapps-dogself" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
            </Host>
            <Host name="nousophia.com" appBase="webapps-test" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>

    My workers.properties looks like this:

        # workers.properties - ajp13
        #
        # List workers
        worker.list=dogself,nousophia
        # Define dogself
        worker.dogself.port=8009
        worker.dogself.host=dogself.com
        worker.dogself.type=ajp13
        worker.nousophia.port=8009
        worker.nousophia.host=nousophia.com
        worker.nousophia.type=ajp13

    Tomcat is started/restarted. I followed these directions for setting it up: http://stackoverflow.com/questions/1765399/linking-apache-to-tomcat-with-multiple-domains. Can someone confirm that it would work as above?
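
    A quick way to see the mismatch the log hints at (mod_jk is dialing the public IP 69.164.218.75 on port 8009 and getting "connection refused") is to check where the AJP connector actually listens; a minimal diagnostic sketch, assuming shell access on the server:

        sudo netstat -tlnp | grep 8009    # which address is the AJP connector bound to?
        telnet 127.0.0.1 8009             # can this machine reach the connector locally?

    If the connector is bound only to localhost, or a firewall drops outside connections to 8009, then any worker whose host= resolves to the public address will fail exactly like this.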

    Read the article

  • Using Samba to share a folder from a Linux guest with a Windows host in VirtualBox

    - by AmV
    I would like to share a folder from a Linux guest with a Windows host (with read and write access if possible) in VirtualBox. I read in these two links: here and here that it's possible to do this using Samba, but I am a little bit lost and I need more information on how to proceed. So far, I managed to set up two network adapters (one NAT and one host-only) and install Samba on the Linux guest, but now I have the following questions:
    1. What do I need to put in smb.conf to share a folder from the Linux guest? (The tutorial provided in one of the links above only explains how to share home directories.)
    2. Are there any Samba commands that need to be executed on the guest to enable sharing?
    3. How do I make sure that these folders are only available to the host OS and not on the internet?
    4. Once the Linux guest is set up, how do I access each of the individual shared folders from the Windows host? I read that I need to map a drive on Windows to do this, but do I use Samba logins or Linux logins? And do I use localhost, or do I need to set up an IP for this?
    Thanks!
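
    For the smb.conf side, a minimal sketch of a share stanza, assuming the guest's host-only interface sits on VirtualBox's default 192.168.56.x subnet and using a hypothetical path and user name:

        [shared]
            path = /home/user/shared
            valid users = user
            read only = no
            ; restrict the share to the host-only subnet so it is never exposed beyond the host
            hosts allow = 192.168.56.

    Samba keeps its own password database, so the Linux account also needs a Samba login created with "sudo smbpasswd -a user"; from Windows the share would then be mapped against the guest's host-only IP (e.g. \\192.168.56.101\shared), not localhost.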

    Read the article

  • Thunderbird 11.0.1 and Lightning 1.3: How do I propose a different time for a meeting?

    - by seaao
    This all happens on my x64 Linux workstation, btw. tl;dr: my colleague invited some people and me to a meeting. The meeting was scheduled a week too late. I, wrongfully, accepted. How do I propose a new time?
    To explain in a bit more detail: I received a meeting request in my mailbox. Thunderbird is nice enough to let me accept or reject it, and after I click the button to accept, it is directly added to my calendar. But when I double-click on the meeting to edit it, I get a function-wise scaled-down version of the meeting: the only settings I can alter are whether I want reminders, and whether I go at all. Trying to drag it to another day doesn't work either: my calendar behaves as if it were read-only (which it isn't, btw). There are several questions (without answers...) to be found on Stack Overflow and in the Thunderbird knowledge base about using Lightning, but I get the idea that I'm one of the few who won't comply with the team even before the meeting has started. My googling revealed no bugs or feature requests in the direction I'm thinking of. A link to an explanation of how to achieve this, or another perspective on how to reach the desired goal (a meeting that works for my colleagues and me), would be most welcome!

    Read the article

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

        IPS="list of IPs"
        iptables --new-chain webtest
        # Accept all established connections
        iptables -A INPUT --protocol tcp --dport 80 --jump webtest
        iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
        iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
        for ip in $IPS; do
            iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
        done
        iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various log entries in access_log, e.g.:

        221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
        207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through via the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts? And how exactly are these three examples slipping through?
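
    The packet counters can show exactly which rule those stray requests are matching; a quick diagnostic sketch:

        sudo iptables -L INPUT -v -n --line-numbers     # per-rule packet/byte counters, numeric addresses
        sudo iptables -L webtest -v -n --line-numbers   # same for the custom chain

    Watching the counters while one of those requests arrives tells you whether it is being accepted by the ESTABLISHED,RELATED rules, by some earlier rule in INPUT, or never hitting these chains at all (for example if the rules were appended after a pre-existing accept-all rule).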

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option, just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.
    EDIT: The target audience/users of this kind of system would be developers, each of whom needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
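
    A sketch of the first sanity check, plus one plausible KVM guest install on an Ubuntu host (package, image and variant names here are assumptions, not a tested recipe):

        egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means Intel VT-x or AMD-V is available for KVM
        sudo apt-get install qemu-kvm libvirt-bin virtinst
        sudo virt-install --name centos54 --ram 1024 --vcpus 2 \
            --disk path=/var/lib/libvirt/images/centos54.img,size=20 \
            --cdrom CentOS-5.4-x86_64-bin-DVD.iso --os-variant rhel5.4

    KVM does require VT-x/AMD-V, so machines without those flags would need Xen paravirtualization or VMware instead; checking the CPU flags first tells you which camp each developer's PC falls into.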

    Read the article

  • Sun Power Button Won't Shut Down System

    - by user36680
    Background: We are running NIS and have NFS mounts from a Solaris 10 workstation to a Solaris 8 server. If the workstation loses its network connection for some reason, when I look at the workstation's console I see repeated messages of the form:

        <date> <time> <hostname> ypbind[<pid>]: NIS server not responding for domain "<domain>"; still trying.

    If I try to log in at the console as a user, it won't work because it can't authenticate my account through NIS. It also won't return to a login prompt again, so I can't log in as root. If I press the power button on the workstation (without holding it in), I see:

        <date> <time> <hostname> power: WARNING: Power off requested from power button or SC, powering down the system!
        Shutdown started. <date> <time> Changing to init state 5 - please wait.
        <date> <time+2 minutes> <hostname> power: WARNING: Failed to shut down the system!

    and I continue to see messages of the form:

        <date> <time> <hostname> ypbind[<pid>]: NIS server not responding for domain "<domain>"; still trying.

    So, the questions are:
    1. How do I make NIS stop trying (because I know it will fail)?
    2. Why won't it shut down?
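
    On Solaris 10 the NIS client runs under SMF control, so one way to stop the retries, assuming you can get a shell at all (for example from single-user mode), is a sketch like:

        svcadm disable network/nis/client    # stop the ypbind retry loop
        svcadm disable network/nfs/client    # assumption: also release the NFS client if mounts are wedged

    Hard-mounted NFS filesystems that can no longer reach their server are a common reason an init-state change hangs, which would also fit the failed power-button shutdown here.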

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about the naming schemes and, once that is resolved, about whether there is compatibility. I will gladly reword this question to be more general if there is not already an article helping with the confusion around SATA naming and standards. I see similar, but not identical, questions and will accept this as a duplicate if it is judged as such. The specifications on the eCommerce site for the NAS say "Controller Interface Type: Serial ATA-150"; the product home page for the manufacturer says "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drive say "Interface Type: Serial ATA-300"; the product home page for the manufacturer says "Interface: SATA 3.0 Gbps". Wikipedia says many things about the different naming conventions, the closest being: "SATA II 3.0 Gbit/s, which was colloquially referred to as 'SATA 3G' [bps] or 'SATA 300' [MB/s], since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both 'SATA 1.5G' [b/s] or 'SATA 150' [MB/s]. Therefore, they will operate with negligible differences between them." Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but I really want to clear up these naming schemes.

    Read the article

  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will be relaying mail for all of our other servers in the network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems, systems that my own server will inevitably interact with at some point. Hence, my questions:
    1. What things should be kept in mind and double-checked when setting up an email server?
    2. What resources are available for checking if my email server is set up correctly?
    I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: "you should have X and Y in your set-up, because when talking to server software Z, it typically tries to weed out open relays by checking for these." Some things I've discovered myself:
    - Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup for the peer IP address when receiving. Matching a reverse lookup with a follow-up forward lookup is probably employed to weed out open relays run through malware on home networks.
    - Make sure the user in the From address exists. The From address is easily spoofed. A receiving mail server may try to contact the mail server in the From domain, and see if the From user actually exists.
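
    Both DNS checks are easy to verify from the outside; a quick sketch with dig (the IP and host names are documentation-range placeholders, substitute your mail server's):

        dig -x 203.0.113.25 +short       # PTR record: should name your mail host
        dig mail.example.com A +short    # forward lookup of that name: should return the same IP
        dig example.com MX +short        # does the domain actually advertise this server for mail?

    If the PTR record and the forward lookup of the name it returns don't agree, many receiving servers will greylist or reject you outright, so this pair is worth checking before anything else.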

    Read the article

  • Can a Windows Domain play along with a Hosted Exchange service?

    - by benzado
    I'm setting up a computer network for a small (10-20 people) company. They are currently using a Hosted Exchange service they are totally happy with. Other than that, they are starting from scratch (office doesn't even have furniture yet). They will need some kind of file sharing server set up in their office. If I set up a machine as a file server and nothing more, users will have three passwords to deal with: local machine, file server, and email. If I set up a Domain Controller, identities for local machine and file server will be the same. But what about the Hosted Exchange server? Must the users have a separate email password, or is it possible to combine the two? (I realize it might depend on the specific hosting provider, but is it possible?) If not, it seems like I have these options: Deal with it: users have a separate email password. Host Exchange on the local server: more than they want to manage in-house? Purchase a hosted VPS, make it part of the domain, and host Exchange there. (Or can/should a VPS be a domain controller?) I realize I have a lot of questions in there. The main one: is there any reason to use a Hosted Exchange service if I'm setting up other Windows services?

    Read the article

  • vSphere - datastore falling off a host

    - by Chadddada
    Recently we have been running the vCheck PowerShell script daily to help monitor our vSphere ESX 4.0 environment. One of the oddities we have been seeing is that some of the datastores on the SAN don't always show up on every host. Our hosts are connected redundantly, via FC, to Brocade FC switches, which then connect via fiber to our EMC AX4 SAN. While all the datastores are presented to each host, and the hosts see them initially, some seem to fall off and are no longer visible. It's easy enough to rescan for datastores and add them back to the hosts, but this seems to be an error. Has anyone else seen this, or does anyone know why it may be happening?
    Response to a question ("Is it always the same ESX servers that lose their connection?", Scott Warren): No, this happens randomly on random hosts. If a VM is running on a particular host, and the VM's disks are on a SAN datastore, then that datastore won't disappear. It seems to happen if a host doesn't touch a datastore for a while and just forgets about it.
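
    For the manual rescan step, a sketch from the ESX 4.0 service console (the adapter name is an assumption; the first command lists the real ones):

        esxcfg-scsidevs -a    # list storage adapters to find the FC HBA names
        esxcfg-rescan vmhba2  # rescan that HBA for devices and VMFS volumes

    The same rescan can be triggered per host from the vSphere Client, but scripting it from the service console makes it easier to compare what each host sees before and after a datastore "falls off".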

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions:
    1. Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN-% numbers in iotop, which I find hard to explain because the target is /dev/null.
    2. How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
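
    One variable worth eliminating is the page cache: plain dd reads go through it, which costs CPU and memory bandwidth and could be behind the odd SWAPIN numbers. A minimal sketch using direct I/O (the glob is an assumption about the multipath device naming):

        for d in /dev/mapper/mpath*; do
            dd if="$d" of=/dev/null bs=1M count=3000 iflag=direct &
        done
        wait

    iflag=direct asks GNU dd to open the device with O_DIRECT, so each stream reads straight from the array; if the aggregate then scales across both arrays, the earlier plateau was a caching artifact rather than a fabric or HBA limit.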

    Read the article

  • Custom Domain for Google App Engine and Google Apps

    - by Kevin
    I have set up and configured Google App Engine and Google Apps to use my custom domain with a CNAME 'www'. I have configured my DNS (via fasthosts.co.uk) with the CNAME and pointed it to ghs.google.com. I can access the website at the App Engine domain capel-y-crwys.appspot.com, but I can't access it via my custom domain www.capelycrwys.org.uk. I have allowed several days for DNS propagation. The really strange thing is that I can access the app via my custom domain when I use the web browser on my Android mobile phone, but not from my home internet connection, my work internet connection or a friend's internet connection. I tried a few online web proxies and could access the app via the custom domain through them. I posted this question on the Google forums, code.google.com/appengine/forum/?place=topic%2Fgoogle-appengine%2FfUP-G_0FKE4%2Fdiscussion, and a commenter said he could access the app via the custom domain. So why can't I access it directly via my home internet connection etc.? I've done loads of googling and even found a similar-sounding post here on Server Fault, serverfault.com/questions/208461/custom-domain-name-server-not-found-google-app-engine-and-google-apps, but it doesn't have an answer that helps me.
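
    A quick way to compare what different vantage points resolve is to query the CNAME directly; a sketch with dig (the fasthosts name-server host is an assumption, check the zone's NS records for the real one):

        dig www.capelycrwys.org.uk CNAME +short                        # should print ghs.google.com.
        dig @ns1.fasthosts.co.uk www.capelycrwys.org.uk CNAME +short   # ask the authoritative server directly

    If the authoritative answer is correct but your home ISP's resolver returns nothing or something stale, the symptom pattern here (phone and proxies fine, several local resolvers not) points at resolver caching rather than anything on the App Engine side.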

    Read the article

  • Ubuntu 12.10, Unity, AMD 12.11 beta drivers, AMD APP SDK 2.7 and OpenCL detection of multiple gpus

    - by junkie
    I'm using Ubuntu 12.10, AMD 12.11 beta drivers, AMD APP SDK 2.7 and OpenCL. I have three AMD Radeon 7990s plugged in, each of which is a dual 7970, so I have six GPUs altogether; I plan to go up to eight in a few days. Windows couldn't use even four, but Linux works fine with six so far. The strange thing is that the six GPUs are only detected by OpenCL under Unity (the Ubuntu default window manager). If I switch to e17, Blackbox, Fluxbox or anything else for that matter, OpenCL only detects one. I'm using a simple OpenCL program that lists all devices to check. I've also checked the output of aticonfig --list-adapters, fglrxinfo and clinfo. The first two always show six in all window managers, whereas clinfo shows six in Unity but one GPU in all other WMs. I'm also using an X config generated by aticonfig --initial -f --adapter=all, and only one monitor. I've also checked with lsmod that the fglrx module is loaded in all WMs. So I have two questions:
    1. Why does OpenCL see six GPUs only in Unity?
    2. How can I enable six GPUs in other lightweight WMs?
    Basically I'm getting at: what determines how many GPUs the OpenCL runtime sees? Thanks.
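
    One workaround often suggested for fglrx-era OpenCL, which only enumerates GPUs it can associate with an X display, is to point the runtime at the display that owns all the adapters; a sketch (the effect of the COMPUTE variable in this exact setup is an assumption):

        export DISPLAY=:0
        export COMPUTE=:0    # AMD APP hint: use display :0 for compute even from another session
        clinfo               # recount how many devices the OpenCL runtime now reports

    If Unity's session is the only one that initializes all six adapters on :0, that would explain the difference: the other WMs may be starting X in a way that activates just the primary GPU.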

    Read the article

  • External HDD incorrectly detected as internal - how change to enable hot swap/eject?

    - by Sam
    Hi all, I have Win 7 x64 Home Premium. The HDD is a Seagate Barracuda 7200.7 ST3120827AS, 3.5", serial 3ms006n6, firmware 3.42 (no further updates), in a NexStar CX external case (drivers installed). I have three drives:
    1. WD320 with OS installed
    2. WD750 data storage (internal)
    3. Seagate 120 (external), connected via an eSATA board wired to a SATA port on the motherboard (MSI P43 Neo)
    I tried uninstalling the HDD in Device Manager to no effect. Oddly, the internal WD750 is detected as an external drive, and the Windows taskbar icon allows it to be ejected (unlike the Seagate). All drives are configured Online, Simple, Basic, NTFS, Active, Primary Partition (except the C drive). The Seagate was previously used as a primary disk with an XP operating system, so I deleted the volume and created/reformatted it (not quick). The HDD is no longer "Active", but that did not fix the problem.
    Background: Originally, I installed Win 7 with the BIOS set to IDE and forgot to install the chipset drivers. Then I changed Win 7 to install the AHCI drivers, changed the BIOS to AHCI and rebooted. Win 7 loaded drivers, but the WD HDD gave problems/crashed. I installed the chipset drivers and the latest Intel storage matrix software thingie (in safe mode). Everything worked fine after that, except for the problem of not correctly detecting the external drive. I have noticed that under the driver properties (and similarly in the registry) the two drives are configured differently (e.g. in the driver details the capabilities value for the WD is set to 0000006, CM_DEVCAP_REMOVABLE & EJECTSUPPORTED, whereas the Seagate shows 0000080 & CM_DEVCAP_SURPRISEREMOVALOK). Any easy way to configure things? I tried physically swapping the SATA connections on the mainboard without success. So far I have found that a solution to my problem might be to perform some registry changes: http://superuser.com/questions/12955/how-do-i-remove-the-option-to-eject-sata-drives-from-the-windows-7-tray-icon

    Read the article

  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part I do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions:
    1. Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diffs will always be empty, and not worry about wasted resources?
    2. If I should keep excluding these files from the repo, is there a way to make this the default? I.e., add something to my .hgrc file that always appends options like -I *.tex -I *.R to my 'hg add' commands?
    Thanks!
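
    One way to make the exclusion automatic, rather than patching the add command itself, is to ignore those extensions repo-wide: a bare hg add (and hg addremove) skips ignored files. A minimal .hgignore sketch at the repository root:

        syntax: glob
        *.sqlite
        *.csv

    Ignoring only affects untracked files: explicitly naming a file (hg add rev2.sqlite) still adds it, and anything already tracked stays tracked. As for the warning, it is about the memory needed to diff such a large file at commit time; if the files truly never change, that cost is paid once at the initial commit.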

    Read the article

  • Windows 8 using as a webserver

    - by Jason
    I have a few hobby websites that I currently host on CentOS 6: Apache, mail serving, PHP, MySQL, nothing special. In the past I used Windows XP for this same task, for years, and I was OK. I switched to Linux, and for the last few years it has been such a pain: updates break things, and certain apps only support certain distros without compiling from source. It keeps me from working on my hobby sites because I am always fixing something. With Windows I locked it down, ran a hardware firewall and packet analyser, kept up on updates and A/V, and never had a problem. I don't allow RDC from outside the local LAN, no FTP is open, and I run OpenSSH on an obscure port. I am considering switching to Windows 8 (since it is a cheaper license now than Windows 7) and running Apache, hMailServer, PHP and MySQL, just like my CentOS install. My questions:
    1. I am not familiar with Windows 8; can the above be done like on XP? Are there new security restrictions, or does the OS prevent any of this from happening?
    2. The machine is an Athlon 64-bit X2 with 32 GB of RAM. Will Windows 8 see all of the RAM?
    Technically the machine came with Windows 7, and there is a serial number on it, but I am sure I wiped away the Windows 7 recovery partition when I switched to Linux.

    Read the article

  • Corrupted NTFS Drive showing multiple unallocated partitions

    - by volting
    My external HDD with a single NTFS partition was accidentally unplugged (kids!) and is now corrupted. I've tried running ntfsfix, with no luck; output below. When I look at the disk under Disk Management in Windows 7, it shows up as having 5 partitions, 2 of which are unallocated. None have drive letters and it is not possible to set any (that option and most others are greyed out), so I can't run chkdsk /f. I've tried using MiniTool Partition Wizard, which was mentioned as a solution to another similar question here. It showed the whole drive as one partition, but as unallocated, and the "Check File System" option was greyed out. Is there anything else I could try?

    Output of fdisk -l:

        Disk /dev/sdb: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x69205244
        This doesn't look like a partition table
        Probably you selected the wrong device.
           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   ?   218129509  1920119918   850995205   72  Unknown
        /dev/sdb2   ?   729050177  1273024900   271987362   74  Unknown
        /dev/sdb3   ?   168653938   168653938           0   65  Novell Netware 386
        /dev/sdb4      2692939776  2692991410       25817+   0  Empty
        Partition table entries are not in disk order

    Output of ntfsfix:

        me@vaio:/dev$ sudo ntfsfix /dev/sdb
        Mounting volume...
        ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Attempting to correct errors...
        ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Failed to startup volume: Input/output error
        Checking for self-located MFT segment...
        ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        OK
        ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        Volume is corrupt. You should run chkdsk.

    Options available with MiniTool: [screenshot not included]

    Related questions:
    - How to fix a damaged/corrupted NTFS filesystem/partition without losing the data on it?
    - Repair corrupted NTFS File System

    Read the article

  • How to transform a csv to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data: let's say date, volume, price and direction (sell/buy). Additionally, there is an ID for each transaction, and each closing transaction (the newer one) carries a reference to the corresponding opening transaction. Classical database referencing. Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot or whatever. To do this I need both the buy and the sell price in one row. My thought was to preprocess the CSV into another CSV containing the needed information and then do the statistics. In the end I'd like a solution based on scripts rather than a spreadsheet, as the data might change often (it is exported from an online DB). My current solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks if a corresponding transaction exists. If one is found, a new row is written to the destination CSV; if not, a warning is printed. The bad news: for each row in the source file I have to read the whole file a few times, which causes running times of about 10 seconds for 300 lines. As the line count might soon rise (10k lines), this is not perfect. I am aware that many subshells are spawned in the script, which might cause the performance problems. Now my questions:
    1. Is bash/awk/sed/... a good way to do this?
    2. Should I first import all the data into a "real" local database and use SQL?
    3. Is there an easy way to achieve the desired result?
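
    For this shape of problem, awk avoids the repeated rescans entirely: it can hold every row in an associative array and join in a single pass. A sketch, under the assumptions that column 1 is the transaction ID, column 6 is the reference to the opening transaction, and no field contains a quoted comma:

        awk -F, '
            NR > 1 {
                row[$1] = $0                   # index every transaction by its ID
                if ($6 != "") ref[$1] = $6     # closing rows carry the opening ID
            }
            END {
                for (id in ref)
                    if (ref[id] in row)
                        print row[ref[id]] "," row[id]          # opening row joined with its closing row
                    else
                        print "no match for " id > "/dev/stderr"
            }
        ' transactions.csv > pairs.csv

    One pass over the file is O(n), so 10k lines should take a fraction of a second; if the joins later become more complicated than a single key, importing into SQLite and using SQL joins is the natural next step.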

    Read the article

  • NAT and P2P router crash

    - by returnFromException
    So, I had this argument with my networks teacher. He said that some people complain about router crashes due to too many entries in the NAT tables on a router. I didn't understand, so I asked: "If the application uses the same port, why does the router crash? It should have only one entry (pc-ip, pc-port; public-ip, public-port)." And he said: "it doesn't matter that it's using the same port." I got the idea that NAT creates an entry for every packet that passes through it. I am assuming NAT with overloading, as you might have guessed. So the questions are:
    1. How are NAT entries created, on a per-packet or a per-connection basis? I mean: suppose I send a UDP packet; does the router create an entry?
    2. When I start a TCP connection, does the router create a persistent NAT entry until the connection closes?
    3. Was my teacher right? Can the NAT table overflow from an application on the same port sending packets?
    Thanks in advance.
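
    On Linux-based routers the NAT table is the connection-tracking table, which can be inspected directly; a sketch assuming a router with conntrack-tools installed:

        sudo conntrack -L | head                          # one entry per tracked flow, not per packet
        cat /proc/sys/net/netfilter/nf_conntrack_max      # the table's size limit

    Entries are created per flow: a TCP connection gets one entry that lives until the connection closes (plus a timeout), and a UDP "connection" gets one entry keyed on the address/port pair, refreshed by further packets. Repeated packets on the same flow reuse the entry; the table fills up when many distinct flows are opened, classic P2P behaviour, which is presumably the crash scenario the teacher meant.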

    Read the article

  • Removing Paths/ Landing Pages From SharePoint Search Results

    - by j.strugnell
    Hi there, we've been asked by a client to remove a number of pages from showing up in their public website's search results page. I've been into the SSP and created crawl rules to remove these pages. All seemed to have worked OK, but we have an issue in that landing pages are still showing up in their "www.domain.com/sitearea/" form, though not in their "www.domain.com/sitearea/pages/default.aspx" form. For each page of this type we created one rule to exclude the ".aspx" path, and another rule to include the "/" path but to "Follow links on the URL without crawling the URL itself". We tried adding rules to exclude the "/" form, but that only resulted in everything underneath it being excluded. Does anybody know how to remove both the "area/pages/default.aspx" and the "area/" paths from search results? I'm not sure if it's the done thing to ask two questions in one, but this is in a similar vein, so it should be OK: I was wondering if anyone knew of a tool (or if it is possible) to allow site admins to exclude pages from search results (not via SSP/crawl rules). I know they can do it at the site level, but I was wondering if anything out there enables this at the page level, through either Page or Site Settings?

    Read the article

  • Windows 7 access denied to executables.. by what?

    - by stijn
    Ever since I started using Windows 7 this problem has been bothering me. From time to time I see similar questions popping up on misc forums, but I have never seen an answer. Here are two scenarios that nearly always reproduce it.

    The Explorer way:
    1. In Explorer, navigate to a directory containing at least one exe file.
    2. Go one directory up.
    3. Immediately delete the directory just navigated to.
    This yields a "Folder Access Denied" dialog stating "You need permission to perform this action. You require permission from Administrators to make changes to this folder", with the buttons Try Again and Cancel. Hitting Try Again never works immediately; waiting a minute or so and then clicking it again does work. Note: if in step 2 I wait a minute or more before going up one directory, the problem does not occur and the folder can be deleted.

    The Visual Studio way:
    1. Build a project producing an exe file.
    2. Run the executable, then close it.
    3. Immediately build the project again (by changing a single character in a source file, for example).
    This yields "fatal error LNK1168: cannot open /path/to/the.exe for writing". Note: if in step 2 I wait a minute or more before building again, the problem does not occur.

    Some specs:
    - Happens on both Windows 7 32-bit and 64-bit, with VS2008/2010/2011.
    - Happens on three different machines.
    - I do not have a virus scanner of any kind.
    - I do have a bunch of services disabled, but nothing that prevents Windows from running normally; UAC is disabled as well.
    - Happens on any type of disc.
    - I always use a user account that is in the Administrators group.

    Obviously both scenarios are very similar and extremely reproducible. So I figured some process must have the file open for some reason and release it again later. However, using Sysinternals' handle -a, the exe file in question never shows up (that is the correct way to use handle, right?). So while Explorer/VS report that they cannot access the file, handle.exe says it's not in use anywhere. This leaves me rather clueless, so I'm wondering if someone can come up with a solution: why does this happen, and how do I solve it?

    Read the article

  • Connections to SSH and Samba suffer from heavy delay

    - by Till Helge Helwig
    There are a lot of questions about SSH connections being delayed, which can usually be fixed by disabling DNS lookups. Unfortunately, that doesn't seem to be my problem. Our development server is accessed via SSH and Samba, and opening a connection to it (either SSH or Samba) takes a very long time. Accessing a Samba share from Windows is basically impossible because it hits a timeout; smbclient works, but takes ages. When opening an SSH connection I am immediately prompted for the password, and after hitting Enter the terminal instantly shows the MOTD; it then takes about a minute for the prompt to appear. I watched the load on the server while connecting via SSH and Samba and could not find anything out of order: nothing unusual is running or hogging memory and CPU. I have no clue where this delay might come from. I have already tried UseDNS no in sshd_config and proxy_dns = no in smb.conf, but to no avail. Any idea about what might cause this would be greatly appreciated!
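
    Since the stall happens after authentication and the MOTD, watching a debug instance of sshd can narrow down which step hangs (PAM session setup, quota checks, shell startup files); a sketch using an alternate port so the production daemon is untouched:

        sudo /usr/sbin/sshd -d -p 2222    # one-off foreground sshd with debug output
        ssh -vvv -p 2222 user@server      # client side: watch where the handshake pauses

    Whatever line the debug output is sitting on during the minute of silence is the culprit; and the fact that Samba shows the same delay hints at something both services share, such as a slow name-service lookup configured in nsswitch.conf.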

    Read the article
