Search Results

Search found 14850 results on 594 pages for 'full decent'.

  • IPv6 routing to another interface

    - by Robert
    I'm trying to get an IPv6-enabled router to forward data from one interface to the other, and I'm having issues. When following this example (http://www.cisco.com/en/US/tech/tk872/technologies_configuration_example09186a0080ba6106.shtml) I am able to get full connectivity between all 3 routers in my simulator. However, when I try to use only 1 router, I can't get connectivity to the other interfaces on the same router. My PC is directly attached to FA 0/1 and it can ping the router's interface. However, it cannot ping any other interface on the router (which, unless I'm missing something, it should be able to do). The router, on the other hand, can ping everything. I thought static routes might help, but the router already has routes for everything. I'm thinking the packet should come in, the router looks up the destination in its IPv6 routing table, realizes it's for itself, and should respond. I thought maybe it couldn't respond directly, so I tried pinging a device like 2001:0000:0000:1000::2, but I don't get a response. I'm running IOS 12.4. I'm missing something (hopefully simple), but I just can't see what it is. With only 1 router, how do I enable my PC to talk to the other subnets? Thank you in advance, Robert
    Topology:
        R1
        FA 0/0:     2001:0000:0000:0000::1/52
        FA 0/1:     2001:0000:0000:1000::1/52
        FA 1/0:     2001:0000:0000:2000::1/52
        Loopback 0: 2001:0000:0000:3000::1/52
        PC:         2001:0000:0000:2000::2/52
    PC plugs directly into FA 1/0 on the router.
    --- Configuration ---
        ipv6 cef
        ipv6 unicast-routing
        interface Loopback0
         no ip address
         ipv6 address 2001:0000:0000:3000::1/52
         ipv6 enable
        !
        interface FastEthernet0/0
         no ip address
         duplex auto
         speed auto
         ipv6 address 2001:0000:0000::1/52
         ipv6 enable
        !
        interface FastEthernet0/1
         no ip address
         duplex auto
         speed auto
         ipv6 address 2001:0000:0000:1000::1/52
         ipv6 enable
        !
        interface FastEthernet1/0
         no ip address
         duplex auto
         speed auto
         ipv6 address 2001:0000:0000:2000::1/52
         ipv6 enable
    --- end of config ---
    --- routing table ---
        IPV6Lab#show ipv6 route
        IPv6 Routing Table - 10 entries
        Codes: C - Connected, L - Local, S - Static, R - RIP, B - BGP
               U - Per-user Static route
               I1 - ISIS L1, I2 - ISIS L2, IA - ISIS interarea, IS - ISIS summary
               O - OSPF intra, OI - OSPF inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
               ON1 - OSPF NSSA ext 1, ON2 - OSPF NSSA ext 2
        C   2001:0000:0000::/52 [0/0]
             via ::, FastEthernet0/0
        L   2001:0000:0000::1/128 [0/0]
             via ::, FastEthernet0/0
        C   2001:0000:0000:1000::/52 [0/0]
             via ::, FastEthernet0/1
        L   2001:0000:0000:1000::1/128 [0/0]
             via ::, FastEthernet0/1
        C   2001:0000:0000:2000::/52 [0/0]
             via ::, FastEthernet1/0
        L   2001:0000:0000:2000::1/128 [0/0]
             via ::, FastEthernet1/0
        C   2001:0000:0000:3000::/52 [0/0]
             via ::, Loopback0
        L   2001:0000:0000:3000::1/128 [0/0]
             via ::, Loopback0
        L   FE80::/10 [0/0]
             via ::, Null0
        L   FF00::/8 [0/0]
             via ::, Null0
    --- end of routing table ---
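
    One frequent cause of this exact symptom is the PC rather than the router: if the PC only has an on-link /52 route, it can reach the directly attached interface but has no route toward the router's other subnets. A minimal sketch, assuming the PC runs Linux with its NIC at eth0 (on Windows the netsh interface ipv6 equivalent applies), using the addresses from the post:

        # assumption: 2001:0000:0000:2000::1 (FA 1/0) is the router interface the PC is attached to
        ip -6 route add default via 2001:0000:0000:2000::1 dev eth0
        ip -6 route show                   # confirm the default route is present
        ping6 2001:0000:0000:1000::1       # an address on another interface of the same router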

    Read the article

  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
    Currently our documents are all hosted on a Windows 7 box. Users access the files over a Windows share, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility, and users can access previous versions of a file (from the backups) using the Windows Explorer "Previous Versions" feature. This setup is currently working well, except for the following: We would prefer access to hourly versions of a file, not daily. The previous-versions mechanism is tied to the backup mechanism: Windows 7 performs a full backup every week and an incremental backup every day, and the previous versions of a file are simply whatever is available in the backups. If you have 20GB of documents and want to maintain at least a three (3) year history, you will use at minimum 3 years * 52 weeks * 20GB, or about 3TB, even if the documents change very little - a pretty inefficient use of space. Looking up previous versions of a file is also very slow (tens of minutes), which is probably related to the previous issue: Windows has to traverse all of its backups. I am considering using SVN plus TortoiseSVN autocommit/autoupdate. It would have the following advantages: backups are easy and also cover the whole history of each document (just back up the repository); previous versions can be created frequently - I think an svn commit/update can be done every two minutes or so; and users can sync over the net. However, I can see the following issues: more conflicts than in the original setup, because multiple users can now edit the same file even when both are online, i.e. able to connect to the SVN repo (users can of course lock a file before editing, but that means they have to adjust their habits); and a delay in the propagation of file changes - with Windows 7 file sharing, changes made by one online user are instantly visible to other online users, whereas with the SVN setup changes are only propagated when users run the svn add/commit/update sequence, so the delay will probably be a few minutes, and this workflow will no longer work: "Hi, I just edited document X, can you have a quick look?" I would like to ask the community's opinion on alternative setups, or improvements to the above setup to work out the kinks.
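
    As a rough illustration of the autocommit/autoupdate idea, here is a minimal sketch of a batch file that could be run every few minutes from Task Scheduler on each client; the working-copy path and commit message are assumptions, and lock/conflict handling is deliberately left out:

        REM hypothetical working copy location - adjust to the real documents folder
        cd /d C:\Documents
        svn update --quiet
        svn add --force . --quiet
        svn commit -m "autocommit %DATE% %TIME%" --quiet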

    Read the article

  • Triple (3) Monitors under Linux

    - by widgisoft
    I have a 3 monitor setup (each 1680x1050) via an Nvidia NVS440 (2 GPUs, 2 outputs per GPU totalling 4 outputs); this works fine under Windows XP,7 but caused considerable headaches under Linux (Ubuntu 9.04). I had previously used an XFX 9600GT and the onboard XFX 9300GS to produce the same result but the card was noisy and power hungry and I was hoping that there was some magical switch in the NVS4400 that got rid of this annoying problem - turns out the NVS440 is just 2 cards on one physical PCB :-p (I searched the net high and low for people using this card under Linux but found nothing, if anything the card uses less power and is fan less so I was to benefit from it either way) Anyway, using either set up there were 5 solutions available: Have 3 separate X instances, all un joined Have 3 separate X instances, adjoined by Xinerama Have 2 separate X instances - One using twin-view, both adjoined by Xinerama Have 2 separate X instances - One using twin-view but no Xinerama Have a single Twin-view setup and leave the 3rd screen unplugged :-p The 4rd option, using 2 separate X instances and twinview (but no xinerama) was the best balance in terms of performance and usability but caused 2 really annoying issues You couldn't control (without altering the shortcuts) which screen an application opened onto - and once it was opened you couldn't move it to another screen without opening up terminal and forcing it to move Nvidia's overriding or falsifying of Xinerama breaks and the 2 screens joined by Twin view behave like a single huge screen causing popups to open in the middle of both screens and maximising of windows stretches to the width of the first 2 screens Firefox can only run one instance as the same user so having multiple firefox windows requires at least 2 users The second option "feels" like the right option, but OpenGL is basically disabled and playing any sort of game or even running anything graphical causes a huge performance drop and instability - even trying to run a basic emulator for gba or gens just causes the system to fall over. It works just enough to stare at your desktop and do nothing but as soon as you start doing some work - opening windows, dragging things around - running multiple copies of firefox it just really feels slow. The last open, only going dual screen works perfectly and everything performs as required, full GPU acceleration - two logical screen spaces - perfect, just make it work across GPUs like windows! :-p Anyway, I know RandR was supposed to pick up the slack when it would introduced GPU objects of sorts to allow multiple GPUs to be stitched together to create one huge desktop at a much deeper layer than Xinerama. I was wondering if this has now been fixed (I noticed X server 1.7 is out) and whether anyone has got it running successfully? 
Again, my requirements are: One huge desktop to drag any window across Maximising of windows to each screen (as XP does) Running fullscreen apps on the primary screen and disabling the mouse from moving onto the others or on all 3 stretched Finally as a side note; I am aware of the Matrox triple (and dual) head splitter but even the price they go for on eBay is more than I can afford atm, my argument: I shouldn't have to buy extra hardware to get something to work on Linux when it's something that's existed in the windows world for a long time (can you tell I don't get on with X :-p); If I had the cash I'd have bought the latest version of this box already (the new version finally supports large resolutions as the displays I have 1680x1050 each).
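
    For what it's worth, once the driver and X server are new enough, RandR can sometimes stitch outputs without Xinerama; whether it will span the NVS440's two GPUs is exactly the open question, so treat this as a sketch to test rather than a known fix (the output names are assumptions - list the real ones with xrandr first):

        xrandr                              # list the outputs the driver actually exposes
        xrandr --output DVI-I-0 --auto --primary \
               --output DVI-I-1 --auto --right-of DVI-I-0 \
               --output DVI-I-2 --auto --right-of DVI-I-1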

    Read the article

  • Windows 7 & Virtual PC and Internet (gateway) problems on host PC

    - by Mufasa
    I upgraded to Windows 7 on a PC that is a few years old. The CPU was one revision away from having Hyper-V on it. So, I had to install Microsoft Virtual PC 2007 (v6.0.156.0) to run full XP instances instead of the seamless XP virtualization that is advertised so much. That's fine though; the 'older' version is useful since I use it to run different versions of the whole XP/IE stack for testing. (I'm a web developer.) ...And for the one 16-bit application we still use at the office for scheduling. * sigh * The virtual instances work fine, including networking. My issue is that after a reboot or coming out of sleep mode, my host Windows 7 won't connect to the Internet. It will connect to the local network fine. If I disable the "Virtual Machine Network Services" item (I'll call "VMNS" from here on) in the LAN Connection properties box, it starts working. But than the Virtual PC instances lose their network connectivity. If I re-enable VMNS again in the same instance, everything works (Internet on host and in the virtualized instances). But after the next reboot/sleep cycle this starts over. The route table gave me a clue though. When doing a cycle w/ VMNS enabled: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 On-link 10.0.3.51 20 0.0.0.0 0.0.0.0 10.0.10.10 10.0.3.51 276 ... After VMNS is disabled, the first route goes away. I assume that is for VMNS to intercept virtualized instance's network connections and forward them correctly? Just a guess though. More info: I checked my Firewall settings and Services (because I'm sort of a control nazi and turn off a lot) but couldn't find anything that made sense and if turned on changed anything. So it might be something there I'm missing, but I don't know what. My current hacked solution: So, I figured I'd mess with the routes myself to see if that helped, it did. If I run a route delete 0.0.0.0 on the universal (0.0.0.0) gateway routes, and add back in just the 2nd line with route add 0.0.0.0 mask 0.0.0.0 10.0.10.10--the one that points to my actual gateway (10.0.10.10)--then I don't have to mess with the disable/enable cycle of VMNS, and everything works. Running those two commands is faster then bringing up connection options and disabling and re-enabling VMNS, but I still don't want to have use that hack script every boot either. (Oh, and I also tried messing with hard-coding TCP/IP settings in my network adapter, including setting high metrics, etc., but that didn't help either.) Any suggestions on the right way to fix this?
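
    If the manual route juggling keeps working, one way to avoid re-running it after every wake/reboot cycle is to make the correct default route persistent; a hedged sketch using the gateway from the post, run from an elevated prompt:

        route delete 0.0.0.0
        REM -p stores the route in the registry so it survives reboots
        route -p add 0.0.0.0 mask 0.0.0.0 10.0.10.10
        route print
        REM note: deleting 0.0.0.0 also removes the route VMNS adds, as described above,
        REM so check that the virtual instances still have connectivity afterwards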

    Read the article

  • A gigabit network interface is CPU-limited to 25MB/s. How can I maximize the throughput?

    - by netvope
    I have a Acer Aspire R1600-U910H with a nForce gigabit network adapter. The maximum TCP throughput of it is about 25MB/s, and apparently it is limited by the single core Intel Atom 230; when the maximum throughput is reached, the CPU usage is about 50%-60%, which corresponds to full utilization considering this is a Hyper-threading enabled CPU. The same problem occurs on both Windows XP and on Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled. There is no Linux driver available on Nvidia's website. ethtool -k eth0 shows that checksum offload is enabled: Offload parameters for eth0: rx-checksumming: on tx-checksumming: on scatter-gather: on tcp segmentation offload: on udp fragmentation offload: off generic segmentation offload: off The following is the output of powertop when the network is idle: Wakeups-from-idle per second : 61.9 interval: 10.0s no ACPI power usage estimate available Top causes for wakeups: 90.9% (101.3) <interrupt> : eth0 4.5% ( 5.0) iftop : schedule_timeout (process_timeout) 1.8% ( 2.0) <kernel core> : clocksource_register (clocksource_watchdog) 0.9% ( 1.0) dhcdbd : schedule_timeout (process_timeout) 0.5% ( 0.6) <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer) And when the maximum throughput of about 25MB/s is reached: Wakeups-from-idle per second : 11175.5 interval: 10.0s no ACPI power usage estimate available Top causes for wakeups: 99.9% (22097.4) <interrupt> : eth0 0.0% ( 5.0) iftop : schedule_timeout (process_timeout) 0.0% ( 2.0) <kernel core> : clocksource_register (clocksource_watchdog) 0.0% ( 1.0) dhcdbd : schedule_timeout (process_timeout) 0.0% ( 0.6) <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer) Notice the 20000 interrupts per second. Could this be the cause for the high CPU usage and low throughput? If so, how can I improve the situation? As a reference, the other computers in the network can usually transfer at 50+MB/s without problems. A computer with a Core 2 CPU generates only 5000 interrupts per second when it's transferring at 110MB/s. The number of interrupts is about 20 times less than the Atom system (if interrupts scale linearly with throughput.) And a minor question: How can I find out what is the driver in use for eth0?
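
    On the minor question, the driver behind eth0 can be read straight from ethtool or sysfs, and if the driver supports it, interrupt coalescing can cut the per-packet interrupt rate considerably. A hedged sketch - whether the nForce (forcedeth) driver honours these coalescing knobs is an assumption worth verifying:

        ethtool -i eth0                               # driver name and version
        readlink /sys/class/net/eth0/device/driver    # same information from sysfs
        ethtool -c eth0                               # current coalescing settings
        sudo ethtool -C eth0 rx-usecs 100 tx-usecs 100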

    Read the article

  • Can Remote Desktop Services be deployed and administered by PowerShell alone, without a Domain in Windows Server 2012 and 2012 R2?

    - by Warren P
    Windows Server 2008 R2 allowed deployment of Terminal Server (Remote Desktop Services) without a domain, and without any insistence on domains. This was very useful, especially for standalone virtual or cloud deployments of a server that is managed remotely for a remote client who has no need or desire for any ActiveDirectory or Domain features. This has become steadily more and more difficult as Microsoft restricts its technologies further and further in each Windows release. With Windows Server 2012, configuring licensing for Remote Desktop Services, is more difficult when not on a domain, but possible still. With Windows Server 2012 R2 (at least in the preview) the barriers are now severe: The Add/Remove Roles and Features wizard in Windows Server 2012 R2 has a special RDS deployment mode that has a rule that says if you aren't on a domain you can't deploy. It tells you to create or join a domain first. This of course comes in direct conflict with the fact that an Active Directory domain controller should not be the same machine as a terminal server machine. So Microsoft's technology is not such much a Cloud Operating System as a Cluster of Unwanted Nodes, needed to support the one machine I actually WANT to deploy. This is gross, and so I am trying to find a workaround. However if you skip that wizard and just go check the checkboxes in the main Roles/Features wizard, you can deploy the features, but the UI is not there to configure them, and when you go back to the RDS configuration page on the roles wizard, you get a message saying you can not administer your Remote Desktop Services system when you are logged in as a Local-Computer Administrator, because although you have all admin priveleges you could have (in your workgroup based system), the RDS configuration UI will not accept those credentials and let you continue. My question in brief is, can I still somehow, obtain the following end result: I need to allow 10-20 users per system to have an RDS (TS) session. I do not need any of the fancy pants RDS options, unless Microsoft somehow depends on those features being present. I believe I need the "RDS Session Host" as this is the guts of "Terminal Server". Microsoft says it is "full Windows desktop for Remote Desktop Services client. I need to configure licensing so that the Grace Period does not expire leaving my RDS non functional, so this probably means I need a way to configure TS CALs. If all of the above could technically be done with the judicious use of the PowerShell, I am prepared to even consider developing all the PowerShell scripts I would need to do the above. I'm not asking someone to write that for me. What I'm asking is, does anyone know if there is a technical impediment to what I want to do above, other than the deliberate crippling of the 2012 R2 UI for Workgroup users? Would the underlying technologies all still work if I manipulate and control them from a PowerShell script? Obviously a 1 word Yes or No answer isn't that useful to anyone, so the question is really, yes or no, and why? In the case the answer is Yes, then how.
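
    For reference, the RemoteDesktop PowerShell module in 2012/2012 R2 does expose the deployment steps as cmdlets; whether they accept a non-domain-joined machine is precisely the open question, so this is a sketch of what would be scripted, not a confirmed workaround (the host name is a placeholder):

        Import-Module RemoteDesktop
        New-RDSessionDeployment -ConnectionBroker "rds01.workgroup.local" -SessionHost "rds01.workgroup.local"
        Set-RDLicenseConfiguration -Mode PerUser -LicenseServer "rds01.workgroup.local" -ConnectionBroker "rds01.workgroup.local"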

    Read the article

  • What are the most likely bottlenecks determining the performance of CamStudio screen recording?

    - by Steve314
    When doing screen recording, I can get a frame rate of maybe 15 frames per second for the full screen on my 1080p monitor using the XVID codec. I can increase the speed a bit by recording a region, changing screen modes, and tweaking other settings, but I'm curious what hardware upgrades might give me the biggest bang for my buck. My PC is budget, but modern... Athlon 2 X4 645 (3.1GHz, quad core, limited cache) processor. 4GB single channel DDR3 1066 RAM. ASRock motherboard with NVidia GeForce 7025/nForce 630a Chipset. ATI Radeon HD 5450 graphics card - 512MB on board, not configured to steal system RAM. I dual-boot Windows XP and Windows 7. For the moment, XP is my bigger performance concern as it's still my getting-things-done O/S as opposed to my browser-host O/S. My goal is to make a few programming-related tutorials. For a lot of that I don't need screen recording - I can make up some slides, record audio with the PC switched off, yada yada. When I do need screen recording, I'll mostly be recording Notepad++, Visual Studio or a command prompt. Occasionally, I may be recording some kind of graphics or diagram program and using my pre-Bamboo cheap Wacom tablet - I have the CS2 versions of Photoshop and Illustrator, but I'd much more likely be using Microsoft Paint. Basically, what I'll be recording won't be making huge demands on the machine - but recording a fair number of pixels (720p preferred) will be useful. What's particularly wierd - not so long ago I still had a five-year-old Pentium 4 based PC. And (with the same 1080p monitor) it could record at not far from the same frame rate. So clearly the performance issues are more subtle than just throw-money-at-it. My first guess would be that the main bottleneck is the bandwidth for transferring data to/from the graphics card. Is that likely to be correct? In support of that, see this [Radeon HD 5450 review][1] - the memory bandwidth is only 12.8 GB/s. If you can't get data out of graphics memory quickly, you can't transfer it back to the system memory quickly. Apparently, that's slower than some top-end cards in 2002.

    Read the article

  • Can only bring up one of two interfaces

    - by mstaessen
    I'm having a bizarre issue with my HP Proliant DL 360 G4p server. It has two gigabit ethernet interfaces but I can bring up only one of them. This is starting to freak me out and that's why I turned here. I'm running the x64 ubuntu 11.10 server edition. lshw -c network shows that the second interface is disabled. I have no idea why ans how to enable it. $ sudo lshw -c network *-network:0 description: Ethernet interface product: NetXtreme BCM5704 Gigabit Ethernet vendor: Broadcom Corporation physical id: 2 bus info: pci@0000:02:02.0 logical name: eth0 version: 10 serial: 00:18:71:e3:6d:26 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 66MHz capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 duplex=full firmware=5704-v3.27b, ASFIPMIc v2.36 ip=10.48.8.x latency=64 link=yes mingnt=64 multicast=yes port=twisted pair speed=100Mbit/s resources: irq:25 memory:fdf70000-fdf7ffff *-network:1 DISABLED description: Ethernet interface product: NetXtreme BCM5704 Gigabit Ethernet vendor: Broadcom Corporation physical id: 2.1 bus info: pci@0000:02:02.1 logical name: eth1 version: 10 serial: 00:18:71:e3:6d:25 capacity: 1Gbit/s width: 64 bits clock: 66MHz capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 firmware=5704-v3.27b latency=64 link=no mingnt=64 multicast=yes port=twisted pair resources: irq:26 memory:fdf60000-fdf6ffff If I try to ifup eth1, then I get $ sudo ifup eth1 Ignoring unknown interface eth1=eth1. I figured that's what happens when there is no eth1 listed in /etc/network/interfaces. But when I add the configuration for eth1, I still can't ifup. $ sudo ifup eth1 RTNETLINK answers: File exists Failed to bring up eth1. I've also tried ifconfig eth1 up but without any result. For clarity, I have added a masked version of /etc/network/interfaces. I don't think it is the cause of the problem though. $ cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet static address 10.48.8.x netmask 255.255.255.y network 10.48.8.z broadcast 10.48.8.t gateway 10.48.8.u auto eth1 iface eth1 inet static address 193.190.253.x netmask 255.255.255.y network 193.190.253.z broadcast 193.190.253.t gateway 193.190.253.u I really need some help fixing this. It's driving me crazy. Thanks.
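
    "RTNETLINK answers: File exists" usually means a stale address or route is still bound to the interface, so ifup refuses to configure it again. A minimal sketch of clearing that state and retrying (it does not touch the config files); if lshw still reports the port DISABLED afterwards, the problem is below the IP layer:

        sudo ip addr flush dev eth1
        sudo ip route flush dev eth1
        sudo ifdown --force eth1
        sudo ifup eth1
        ip addr show eth1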

    Read the article

  • iTunes and Hulu Playback Choppy and Slow?

    - by Bart Silverstrim
    Specs: Windows XP, latest updates 1.7 ghz Pentium 4 1 gig ram DirectX 9.0c NVIDIA GeForce FX 5200 with 256 meg RAM OpenGL 2.1 The story: Okay, I had an older system laying around that I figured I would try turning into a mini-media system to connect to our TV. I put together a lot of older parts, got it into working order, etc. and hooked it up and voila'...slower, but usable system that displayed to the TV. It could run some things decently. I put in iTunes, it played video okay. Not great, but okay. Played Hulu and since we have a 1Mb download rate, the minimum for their site, there were some choppy moments when watching their shows, but I found that (sadly) changing resolution to 800x600 seemed to help with the issue when running full screen. I downloaded the application called Boxee and installed it. It wouldn't run; apparently the video card in the system supported OpenGL 1.2, and needed at least 1.4. I bought a cheap card, the 5200, with four times the memory in it and support for OpenGL 2.1. Installed, everything seemed fine. iTunes seemed to run fine, the video driver (PNY video card) came with OpenGL 2.1, and Boxee finally ran. I then upgraded to the latest drivers for the video card and ran the DirectX updater from MS. After that, the OpenGL Extension Viewer wouldn't run. It just stayed as an icon in the task bar. Also, any and all videos in iTunes stuttered and went out of sync horribly. Unwatchable. I tried watching Hulu video in Boxee, and it displayed video like it was a series of stills in a very bad powerpoint. Playing straightforward audio-only came through fine, no stutters no hiccups. I tried system restore to roll back updates to pre-directX updates (I thought that seemed to be the time that triggered the weird behavior), no joy. I tried uninstalling and reinstalling the video drivers. I installed updated audio drivers (ensoniq audiopci), nothing helped. I finally wiped the drive last night and tried reinstalling everything and restoring my iTunes content via an import from a backup. Fresh install, no updater on the video card or directx. the problem was still there although I haven't tested Hulu, the iTunes player is still stuttering like crazy if I play video, fine if I play audio. I know the processor isn't high in heft, but with one gig of RAM and the fact that it seemed to do okay before I thought that the problem must be software related. Has anyone else run into this sort of issue and have a solution other than "buy a new computer"? What specs seem to work with video at the low end for you? Right now the system is of little use other than keeping my music library and iTunes apps synced with my iPod.

    Read the article

  • Enterprise IPv6 Migration - End of proxy.pac? Start of Point-to-Point? +10K users

    - by Yohann
    Let's start with a diagram: we can see a "typical" IPv4 company network with Internet access through a proxy, access to "other companies" through a dedicated proxy, and direct access to local resources. All computers have a proxy.pac file that indicates which proxy to use, or whether to connect directly. Computers only have access to a local DNS (no name resolution for google.com, for example). By the way... the company does not respect RFC 1918 internally and uses public addresses! (historical reasons). Using the Internet proxy explicitly makes it possible to avoid problems from this. What if we were to migrate to IPv6?
    Step 1: IPv6 Internet access. Internet access over IPv6 is easy: just connect the proxy to the Internet over both IPv4 and IPv6. There is nothing to do in the internal network.
    Step 2: IPv6 AND IPv4 in the internal network. Why not go to a full IPv6 network directly? Because there are always old servers that are not IPv6 compatible.
    Option 1: the same architecture as in IPv4, with a proxy.pac. This is probably the easiest solution, but is it the best? I think the transition to IPv6 is an opportunity to stop bothering with this proxy.pac!
    Option 2: a new architecture with transparent proxies, no proxy.pac, and a recursive DNS. Oh yes! In this new architecture: the explicit Internet proxy becomes a transparent Internet proxy; the local DNS becomes a normal recursive DNS that is also authoritative for the local domains; there is no proxy.pac; the explicit company proxy becomes a transparent company proxy; and for routing, the internal routers redirect the IP of appx.ext.example.com to the company proxy, while the default gateway is the transparent Internet proxy.
    Questions: What do you think of this IPv6 architecture? It will reveal the IP addresses of our internal network, but the network is protected by firewalls - is that a real problem? Should we keep the explicit use of a proxy? How would you approach this migration scenario? And how do you do it in your company? Thanks! Feel free to edit my post to make it better.
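
    For readers who have not met one, the proxy.pac under discussion is just a JavaScript function that browsers evaluate per URL; a minimal sketch of the kind of file the current IPv4 design implies (domain and proxy names are placeholders):

        function FindProxyForURL(url, host) {
            if (dnsDomainIs(host, ".example.com"))          // local resources: connect directly
                return "DIRECT";
            if (dnsDomainIs(host, ".partner.example"))      // "other companies" via the dedicated proxy
                return "PROXY proxy-partners.example.com:8080";
            return "PROXY proxy-internet.example.com:8080"; // everything else via the Internet proxy
        }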

    Read the article

  • git private server error: "Permission denied (publickey)."

    - by goddfree
    I followed the instructions here in order to set up a private git server on my Amazon EC2 instance. However, I am having problems when trying to SSH into the git account. Specifically, I get the error "Permission denied (publickey)." Here are the permissions of my files/folders on the EC2 server: drwx------ 4 git git 4096 Aug 13 19:52 /home/git/ drwx------ 2 git git 4096 Aug 13 19:52 /home/git/.ssh -rw------- 1 git git 400 Aug 13 19:51 /home/git/.ssh/authorized_keys Here are the permissions of my files/folders on my own computer: drwx------ 5 CYT staff 170 Aug 13 14:51 .ssh -rw------- 1 CYT staff 1679 Aug 13 13:53 .ssh/id_rsa -rw-r--r-- 1 CYT staff 400 Aug 13 13:53 .ssh/id_rsa.pub -rw-r--r-- 1 CYT staff 1585 Aug 13 13:53 .ssh/known_hosts When checking my logs in /var/log/secure, I used to get the following error message every time I tried to SSH: Authentication refused: bad ownership or modes for file /home/git/.ssh/authorized_keys However, after making a few permission changes, I no longer get this error message. Despite this, I am still getting the "Permission denied (publickey)." message every time I try to SSH. The command I am using to SSH is ssh -T git@my-ip. Here is the full log I get when I run ssh -vT [email protected]: OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: Connecting to my-ip [my-ip] port 22. debug1: Connection established. debug1: identity file /Users/CYT/.ssh/id_rsa type -1 debug1: identity file /Users/CYT/.ssh/id_rsa-cert type -1 debug1: identity file /Users/CYT/.ssh/id_dsa type -1 debug1: identity file /Users/CYT/.ssh/id_dsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.2 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2 debug1: match: OpenSSH_6.2 pat OpenSSH* debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr [email protected] none debug1: kex: client->server aes128-ctr [email protected] none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 08:ad:8a:bc:ab:4d:5f:73:24:b2:78:69:46:1a:a5:5a debug1: Host 'my-ip' is known and matches the RSA host key. debug1: Found key in /Users/CYT/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /Users/CYT/.ssh/id_rsa debug1: Trying private key: /Users/CYT/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey). I have spent a few hours going through threads on various sites, including SO and SF, looking for a solution. It seems that the permissions for my files are all okay, but I just can't figure out the problem. Any help would be greatly appreciated. Edit: EEAA: Here are the outputs you requested: $ getent passwd git git:x:503:504::/home/git:/bin/bash $ grep ssh ~git/.ssh/authorized_keys | wc -l grep: /home/git/.ssh/authorized_keys: Permission denied 0
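
    sshd is strict not only about authorized_keys but about every directory above it, and it also helps to confirm that the key the client offers is literally the one installed on the server. A hedged sketch of the usual checks (run the server-side commands via sudo):

        # on the server: ownership and modes sshd will accept
        chown -R git:git /home/git
        chmod 700 /home/git /home/git/.ssh
        chmod 600 /home/git/.ssh/authorized_keys
        # compare fingerprints: these two should match
        ssh-keygen -lf /home/git/.ssh/authorized_keys      # on the server
        ssh-keygen -lf ~/.ssh/id_rsa.pub                   # on the client
        # watch the server log while connecting
        tail -f /var/log/secure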

    Read the article

  • Single domain name potentially resolving to multiple servers

    - by Jace
    first time here at Server Fault, and I apologize in advance that this domain stuff is not really my strength. Any and all suggestions are much appreciated. I am completely lost and incredibly tired! I've inherited an incredibly convoluted system from my predecessor, and I'm trying to find a way to solve it - or I need to be told that it just isn't possible. I've got an old site on ServerA (some kind of Linux distribution), with the domain SomeDomain.com There is a new site sitting on ServerB (Ubuntu), with the intention of having SomeDomain.com to serve it in the future (it is replacing the old site) ServerA also has a web app that is currently in use by other departments within the company (accessible at SomeDomain.com/web-app/) The goal: To have SomeDomain.com and all extensions of this domain name (sub-domains, URL's etc.) serve the new site on ServerB. BUT, the URL SomeDomain.com/web-app/ must serve the Web App on ServerA. The Catch: The ServerA is a shared server with a hosting company with VERY limiting restrictions in place - I cannot adjust DNS settings (apart from Name servers - but cannot set A records or anything, I have full access to ServerB to do as I wish). Therefore the web-app MUST be served from SomeDomain.com/web-app/ and not from a sub-domain or anything. These limitations make migrating the web-app from Server A to Server B rather undesirable, AND this web-app will be replaced in the near future, so it isn't worth the effort right now. Therefore, ultimately I will want 1 domain name to resolve to Server B's IP address most of the time, but in the event that the URL is SomeDomain.com/web-app/, it should resolve to Server A's IP. Note: The domain names don't, technically, have to resolve to one IP or another - but ultimately the URL's must stay consistent Some things I have tried: I've looked into mod_rewrite and .htaccess to try and achieve this effect, but it doesn't look like it's going to work for me - but I may have done it wrong (On Server B, I just checked if the request URI was /web-app/ and tried to serve the /web-app/ folder on Server A) I do have the ability to modify the name servers on both servers I am not able to make a sub domain on Server A that points back to Server A (I assume because the hosting company's servers use the URL to determine what site the serve). I figured this could be good as I'd could set an A record on Server B to point to the web app on Server A - but alas, Server A requires SomeDomain.com. If there is any more information I can give, please let me know. I need a nudge in the right direction, ideas or a solution.
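
    One way to get this effect without touching Server A's DNS at all is to point SomeDomain.com at Server B and let Server B reverse-proxy only the /web-app/ path back to Server A. A hedged sketch for nginx on Server B (Apache's mod_proxy can do the same; the IP is a placeholder, and the Host header is forced to SomeDomain.com because the shared host on Server A selects sites by name):

        server {
            listen 80;
            server_name somedomain.com www.somedomain.com;

            root /var/www/newsite;              # the new site lives on Server B

            location /web-app/ {
                proxy_pass http://203.0.113.10/web-app/;   # Server A's IP
                proxy_set_header Host somedomain.com;
            }
        }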

    Read the article

  • PC will POST whenever it feels like it

    - by kyrpas
    I'm really sick of my PC and I'd love to throw it off the 5th floor but unfortunately I don't have this luxury right now. The issues started when I moved to a new house about 2 months ago. I didn't have this problem before. Case: Arctic Cooling Silentium T1 with embedded Fusion 550 Eco 80 PSU. M/B: ASRock A790GMH/128M Gfx: ATI Radeon HD 5770 Here's what's happening almost on a daily basis: I wake up in the morning, switch on the PC and all the fans start spinning. 9/10 the graphics fan stays on 100% and I know it won't post. If I'm lucky, ATI's fan stays on full power for a second, then goes back to normal and I get a normal post but that doesn't happen often. No, instead it's just drives me crazy. When I get no POST I'm trying a lot of different things and what bothers me the most is that they all work. But not always. No... That way I could find out what the hell is going on and we don't want that.. right? So, sometimes it manages to POST if I: remove the keyboard remove the power cable for a few minutes remove the graphics card remove the HDD cables do nothing, just turn it on and off a few times Sometimes it doesn't POST even if I do all of the above. And I end up removing all power cables from the M/B, and connecting all the stuff one by one. Sometimes it works, sometimes it doesn't and I just have to pray and wait. What the hell is that? I'm getting pissed of again just thinking about it. The only solution is to leave it on 24/7 but I don't want to do that. It should be able to turn on and off when I press the power button. I'm not asking much. I'm starting to think there's some weird electricity/power issue but I really don't understand what it is. There's no logical explanation about it. At least I can't find one. Any ideas?

    Read the article

  • I can't delete a directory inside a junctioned directory

    - by Fredy Muñoz
    So this is the deal. A couple of days ago I moved my profile folder C:\Documents and Settings\fmunoz to a different drive D:\fmunoz. Today, I created a directory in my desktop using the point-and-click method: Right-click on an empty space in the desktop Select New Select Folder Leave the default name New Folder and press Enter I tried to delete the folder using the point-and-click method: Right-click the New Folder directory Select Delete After five seconds, I got the following message: --------------------------- Error Deleting File or Folder --------------------------- Cannot delete New Folder: Access is denied. Make sure the disk is not full or write-protected and that the file is not currently in use. --------------------------- Initially I thought that there must be some sort of indexing services locking the directory so I got a list of open files using the TuneUp Process Manager tool but the New Folder directory wasn't there. I double-clicked My Computer, navigated to the desktop directory C:\Documents and Settings\fmunoz\Destkop, tried to delete the New Folder directory using the same point-and-click method described above and got exactly the same message at the same amount of time. In the same window, I navigated to the actual location of the desktop directory D:\fmunoz\Desktop, tried to delete the New Folder directory and this time it worked. I thought that this behavior was due to some special treatment that Windows gives to the desktop or the profile directories so I tried doing the same thing with a different set of directories: Created a folder D:\dummy Created a junction C:\dummy pointing to D:\dummy Created a New Folder directory in C:\dummy Tried to delete New Folder from C:\dummy. Didn't work. Tried to delete New Folder from D:\dummy. It worked. I tried creating the folder in the actual directory rather than the junction directory: Created a New Folder directory in D:\dummy Tried to delete New Folder from C:\dummy. Didn't work. Tried to delete New Folder from D:\dummy. It worked. I also tried using the Delete button instead of using the Delete option of the context menu but it didn't work. When using the Shift+Delete sequence, it works. It also works by using the rd command in the console, but in both cases the deleted directory doesn't goes to the Recycle Bin, which is my intention when using the Delete context menu option or the Delete button.

    Read the article

  • Reviews Cheyney Group Marketing: What accounting software is available in the market for small businesses?

    - by user225556
    Accounting is the language of business, and good accounting software can save you hundreds of hours at the business equivalent of Berlitz. There's no substitute for an accounting pro who knows the ins and outs of tax law, but today's desktop packages can help you with everything from routine bookkeeping to payroll, taxes, and planning. Each package also produces files that you can hand off to an accountant as needed. Small-business managers have more accounting software options than ever, including subscription Web-based options that don't require their users to install or update software. Many businesses, however--including those that need to track large inventories or client databases, and those that prefer not to entrust their data to the cloud--may be happier with a desktop tool. We looked at three general-purpose, small-business accounting packages: Acclivity AccountEdgePro 2012 (both the product and the company were previously called MYOB), Intuit QuickBooks Premier 2012, and Sage's Sage 50 Complete 2013 (the successor to Peachtree Complete). All three packages offer a solid array of tools for tracking income and expenses, invoicing, managing payroll, and creating reports. These full-featured and highly mature programs don't come cheap. Acclivity AccountEdge Pro, at $299, is the least expensive; and prices climb if you opt to use common time-saving add-ons such as payroll services, or if you add licenses for multiple user accounts. All three are solid on the basics, but they have distinct differences in style and focus. The more you know about your accounting requirements, the more closely you'll want to look at the software you're thinking of buying. Sage 50 Complete should appeal most to people who understand the fine points of accounting and can use the product's many customization features (especially for businesses that manage inventory). QuickBooks works hard to appeal to newbies who need only the basics and might be intimidated by the level of detail and technical language exposed in the other two packages. At the same time, it also has a slew of third-party add-ons that meet specific needs and greatly expand its capabilities. AccountEdge Pro balances accessibility with a strong feature set at an affordable price. It's especially suitable for businesses that need to provide simultaneous access to multiple users.

    Read the article

  • Should I choose KVM/XEN over OpenVZ or use them together?

    - by Krystian
    I've got a dual xeon e5504 server, with [for now] only 8GB of ram. Storage is'n impressive either: 3x 146GB sas in raid5 + 500GB sata drives. Currently it works as a development server, but it's over speced for our needs and since our development methods changed through last 2 years we decided it will work as a production system for some of our applications + we would like to have a separate system for testing/research. Our apps are mainly web apps deployed on tomcats [plural as some of the apps require older versions] and connected to Postgres. I would like to have a production system, where only httpd+tomcat+db are setup and nothing else runs there. Sterile system. Apart from that, I would like a test system, where I can play with different JVM settings, deploy my test apps, play with tomcat/httpd settings and restart them without interfering with the production system. Apart from that, I would like to be able to play with different linux flavors, with newer kernels to test how they work etc. I know, this is not possible with OpenVZ and I would have to choose KVM for that. I am thinking about merging the two, and setting up a KVM to be able to work with different systems [linux only to be frank] + use openVZ to setup separate machines for my development needs. I would simply go with that, but reading here and there about the performance impact full virtualization has over containers and looking at the specs of my server makes me think twice about it. I don't want to loose too much performance, especially because of the nature of my apps [few JVMs running at the same time]. It will be my first time with virtualization, apart from using desktop virtualbox/vmserver. Although I am a fast learner I don't want to mess with the main system so much that it will break the production apps or make them crawl. Although they are more or less internal apps and they don't produce much load, they need to be stable. I've read, that KVM host is a normal linux installation and it allows to run normal processes on it. If that is so, does it allow to run openVZ as well? I mean... can I have KVM and OpenVZ running on the same system/kernel? Or do I have to setup another system to run OpenVZ containers? How much performance impact can this have for me? Will my hardware suffice? oh and one more thing... unfortunately I'm quite limited with the funds... I'm looking for a free solution only :/
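
    Before committing to KVM it is worth confirming that the E5504s actually expose hardware virtualization to the OS (they support VT-x, but BIOS settings can hide it), and remembering that OpenVZ containers need OpenVZ's own patched kernel; a quick check, with the caveat that combining both on one host is exactly what distributions like Proxmox VE packaged at the time:

        egrep -c '(vmx|svm)' /proc/cpuinfo    # greater than 0 means VT-x/AMD-V is visible
        lsmod | grep kvm                      # kvm and kvm_intel should load on this hardware
        uname -r                              # stock kernels run KVM; OpenVZ containers require the OpenVZ kernel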

    Read the article

  • Server to server replication and CPU and 32K/corrupt doc

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, server-to-server replication causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down. We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication traffic. I realise IBM's recommendation is a dedicated port for the cluster. Cluster queues are in tolerance, and under heavy application user load, i.e. when the maximum number of documents are being created, edited and deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, 1 is considered the "hub", and it initiates a PUSH-PULL replication with its cluster mates every 60 minutes, so that the replication load is taken by the hub and not the cluster mates.
    The problem we have: every now and then we get a slow replication time from the hub to a cluster mate, sometimes up to 30 minutes. This maxes out the nserver.exe task on the "cluster mate", which causes it to respond to HTTP requests very slowly. In the past we have found that a corrupt document in the DB can have this effect, but on those occasions the server log shows the corrupt doc's noteID; we run fixup, and all is well. We are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run a fixup mydb.nsf -V, which shows it purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem! Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events. I have set the replication time-out limit to 5 minutes (the maximum we usually see under full user load is 0.1 min), to prevent it replicating for 30 minutes as before. This is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert.
    Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to track down the source of the issue because the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services, with two-way data transfer. But when we do encounter a 32K doc, we can't look at field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help/comments on this would be gratefully received.

    Read the article

  • Network unreachable on Ubuntu guest after trying to set up a host only network on Virtualbox

    - by gkb0986
    I have a Mac OS X host and a bunch of guests including Ubuntu and Arch Linux. I was trying to set up a host-only network at eth1 to let me ssh into the system. But now eth0 isn't working properly either. Ubuntu can no longer connect to remote hosts or browse the internet. It tells me that the network is unreachable. What's gone wrong here? I've included some diagnostics below. $ifconfig lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:10968 errors:0 dropped:0 overruns:0 frame:0 TX packets:10968 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:897264 (897.2 KB) TX bytes:897264 (897.2 KB) Other diagnostic commands and the output: $sudo lspci -n 00:00.0 0600: 8086:1237 (rev 02) 00:01.0 0601: 8086:7000 00:01.1 0101: 8086:7111 (rev 01) 00:02.0 0300: 80ee:beef 00:03.0 0200: 8086:100e (rev 02) 00:04.0 0880: 80ee:cafe 00:05.0 0401: 8086:2415 (rev 01) 00:06.0 0C03: 106B:003F 00:07.0 0680: 8086:7113 (REV 08) 00:0D.0 0106: 8086:2829 (REV 02) $sudo lshw -c network *-network DISABLED description: Ethernet interface product: 82540EM Gigabit Ethernet Controller vendor: Intel Corporation physical id: 3 bus info: pci@0000:00:03.0 logical name: eth0 version: 02 serial: 08:00:27:7d:22:df size: 1Gbit/s capacity: 1Gbit/s width: 32 bits clock: 66MHz capabilities: pm pcix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full firmware=N/A latency=64 link=no mingnt=255 multicast=yes port=twisted pair speed=1Gbit/s resources: irq:19 memory:f0000000-f001ffff ioport:d010(size=8) $lsmod Module Size Used by nls_utf8 12557 1 isofs 40257 1 vboxsf 43743 2 vesafb 13844 1 snd_intel8x0 38570 2 snd_ac97_codec 134869 1 snd_intel8x0 ac97_bus 12730 1 snd_ac97_codec snd_pcm 97275 2 snd_intel8x0,snd_ac97_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi rfcomm 47604 0 snd_seq 61929 2 snd_seq_midi,snd_seq_midi_event bnep 18281 2 bluetooth 180113 10 rfcomm,bnep ppdev 17113 0 psmouse 97519 0 snd_timer 29990 2 snd_pcm,snd_seq joydev 17693 0 snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq vboxvideo 12622 1 serio_raw 13211 0 snd 79041 11 snd_intel8x0,snd_ac97_codec,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 15091 1 snd vboxguest 235498 7 vboxsf parport_pc 32866 0 drm 241971 2 vboxvideo i2c_piix4 13301 0 snd_page_alloc 18529 2 snd_intel8x0,snd_pcm mac_hid 13253 0 lp 17799 0 parport 46562 3 ppdev,parport_pc,lp usbhid 47238 0 hid 99636 1 usbhid e1000 108589 0
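
    A first sanity check is whether the guest's /etc/network/interfaces still brings both adapters up, and whether the VM's adapter 1 is still attached in VirtualBox (lshw reporting eth0 DISABLED with link=no suggests the virtual cable side). A minimal sketch of the file, assuming eth0 is the NAT adapter and eth1 the new host-only one:

        # /etc/network/interfaces
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp      # VirtualBox NAT adapter - provides the Internet access

        auto eth1
        iface eth1 inet dhcp      # host-only adapter for ssh from the Mac host

    then running sudo ifdown -a && sudo ifup -a (or rebooting the guest) re-applies it.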

    Read the article

  • Can't install .NET framework 4.0 on Windows XP professional version 2002 SP3 (OS bug?)

    - by that guy
    .NET framework 4.0 install fails on Windows XP professional version 2002 SP3: I tried to run setup using "run as..." to make sure the admin rights are used ("protect my computer..." tick was deselected of course). I tried everything: installing using online/offline setup, windows update. install goes a little and then "rolls back" and says: Installation did not succeed .NET Framework 4 has not been installed because: Fatal error during installation. for more information about this problem, see the log file. the full log: http://pastebay.net/1433771 Any ideas? EDIT1: I have found this in the log: "BlockIf: You must install the 32-bit Windows Imaging Component (WIC) before you run Setup. Please visit the Microsoft Download Center to install WIC, and then rerun Setup...." So I found it, and launched "wic_x86_enu.exe" - but it said: WIC Setup error Newer version of update is already on the system. I have already installed: .NET framewrok 2.0 SP2 .NET framewrok 3.0 SP2 .NET framewrok 3.5 SP1 but I need 4.0 . EDIT2: another attempt and it's log. (this time better copy of log file): http://pastebin.com/gmGfbM9a (copy to notepad and save as .htm and open with internet browser). I have tried all the solutions I could find - and nothing helped. I have found something weird: when I formatted the hard drive and installed windows xp again - the .NET framework 4.0 installed ok, but when I plugged my 100Mbit internet cable - the operating system kind off "locked itself" and the bug returned - I could no longer install .NET framework 4.0 again. There was no reason for that to happen, for example I have windows server 2003 in local network, but I don't have active directory enabled on it or anything like that - the server just has some folders shared and thats all (all server's "features" are default). I had the second pc with the same problem - with XP on it too. This seems like the bug of Operating System to me. I couldn't find what was causing the problem. After many days I gave up: backuped everything, formatted HDD and installed Windows 7 professional 64bit. .NET framework 4.0 installed with no problem on it.

    Read the article

  • Can't recover hard drive

    - by BreezyChick89
    My drive got corrupt after a thunderstorm. It used to be 1 partition of 2.5tb but now it shows 2 partitions. It's weird because 300gig free space is about how much it had before corrupting, but it was part of the first partition. I tried $ sudo resize2fs -f /dev/sdb1 Resizing the filesystem on /dev/sdb1 to 536870911 (4k) blocks. resize2fs: Can't read an block bitmap while trying to resize /dev/sdb1 Please run 'e2fsck -fy /dev/sdb1' to fix the filesystem after the aborted resize operation. sudo e2fsck -f /dev/sdb1 e2fsck 1.42 (29-Nov-2011) The filesystem size (according to the superblock) is 610471680 blocks The physical size of the device is 536870911 blocks Either the superblock or the partition table is likely to be corrupt! Abort? n .... Error reading block 537395215 (Invalid argument) while reading inode and block bitmaps. Ignore error<y>? yes Force rewrite<y>? yes Error writing block 537395215 (Invalid argument) while reading inode and block bitmaps. Ignore error<y>? yes ... A lot of these. I can't use e2fsck -y because the first question aborts if I say "y". If I put a weight on the 'y' key it fails because none of the errors were really fixed. I asked this question before and tried using gparted but gparted fails because the first thing it does is: e2fsck -f -y -v /dev/sdb1 giving the same error. The disk status says healthy. There are no bad blocks. This is very frustrating because I can see the data in testdisk and it looks like it's all there. I already bought another 2.5tb drive and made a clone using dd. The next step if I can't fix this is to wipe that drive and just move the data with testdisk, but it seems certain folders will copy infinitely until the drive is full because of symlinks or errors so it's also a difficult option. sudo fdisk -l Disk /dev/sdb: 2500.5 GB, 2500495958016 bytes 255 heads, 63 sectors/track, 304001 cylinders, total 4883781168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x0005da5e Device Boot Start End Blocks Id System /dev/sdb1 * 2048 4294969342 2147483647+ 83 Linux sudo badblocks -b 4096 -n -o badfile /dev/sdb 610471680 536870911 badfile is empty I also tried changing the superblock with "fsck -b" but all of them are the same.
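
    Two hedged observations: the fdisk output shows the partition capped at the 2TiB MBR limit (2147483647 1K blocks), while the superblock expects roughly 2.3TiB, which is exactly the mismatch e2fsck complains about; and since a dd clone already exists, it is reasonable to try e2fsck against a backup superblock on the copy. A sketch (mke2fs -n is a dry run that only prints where the superblocks would be):

        sudo mke2fs -n -b 4096 /dev/sdb1        # list backup superblock locations; writes nothing
        sudo e2fsck -b 32768 -B 4096 /dev/sdb1  # retry fsck using one of the reported backups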

    Read the article

  • domain/IN: has no NS records

    - by thejartender
    I have set up a home web server using Ubuntu 12.10, and I can safely say that it works with regards to router forwarding and ports being reachable. I know this because I switched my hosting provider's VPS SOA record to use my ISP IP in an 'A' record and had my website running from home. This verified that my server was configured correctly, so I started what I believe to be the final step in making my old desktop into a full DNS server. I found this tutorial that got me started. My LAN consists of the following: my router, with a gateway of 10.0.0.zzz; my server, with an IP of 10.0.0.xxx; and a laptop, with an IP of 10.0.0.yyy.
    Step 1: I installed bind via sudo apt-get install bind9
    Step 2: I configured /etc/bind/named.conf.local with:
        zone "sognwebdesign.no" {
            type master;
            file "/etc/bind/zones/sognwebdesign.no.db";
        };
        zone "0.0.10.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.0.10.in-addr.arpa";
        };
    Step 3: Updated /etc/bind/named.conf.options with two ISP DNS addresses
    Step 4: Updated /etc/resolv.conf with:
        nameserver 10.0.0.xxx
        search lan
        search sognwebdesign.no
    Step 5: Created an /etc/bind/zones directory
    Step 6: Created /etc/bind/zones/sognwebdesign.no.db with:
        $TTL 3D
        @                   IN  SOA  ns.sognwebdesign.no. admin.sognwebdesign.no. (
                                     2007062001
                                     28800
                                     3600
                                     604800
                                     38400 );
        sognwebdesign.no.   IN  NS   ns1.sognwebdesign.no.
        sognwebdesign.no.   IN  NS   ns2.sognwebdesign.no.
        sognwebdesign.no.   IN  NS   ns3.sognwebdesign.no.
        NS1                 IN  A    10.0.0.1
        NS2                 IN  A    10.0.0.2
        NS3                 IN  A    10.0.0.3
        www                 IN  A    10.0.0.4
        yuccalaptop         IN  A    10.0.0.19
        gw                  IN  A    10.0.0.138
                                TXT  "Network Gateway"
    Step 7: Created /etc/bind/zones/rev.0.0.10.in-addr.arpa with:
        $TTL 3D
        @     IN  SOA  ns.sognwebdesign.no. admin.sognwebdesign.no. (
                       2007062001
                       28800
                       604800
                       604800
                       86400 );
        zzz   IN  PTR  gw.sognwebdesign.no.
        1     IN  PTR  ns1.sognwebdesign.no.
        2     IN  PTR  ns2.sognwebdesign.no.
        3     IN  PTR  ns3.sognwebdesign.no.
        yyy   IN  PTR  yuccalaptop.sognwebdesign.no.
    I then restart bind and dig -x sognwebdesign.no, and it works. Lastly I run named-checkzone on each of my zone files, but my reverse zone check fails with:
        sognwedesign.no/IN: has no NS records
    Can anyone explain what I am doing wrong here, or assist me in getting this configured correctly?
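
    The error itself points at the likely fix: a zone must contain NS records at its apex, and the reverse zone file above has none. A hedged sketch, assuming ns1-ns3 should also serve the reverse zone - add these lines to rev.0.0.10.in-addr.arpa, then re-run the checks with the zone names spelled exactly as they appear in named.conf.local:

        @   IN  NS  ns1.sognwebdesign.no.
        @   IN  NS  ns2.sognwebdesign.no.
        @   IN  NS  ns3.sognwebdesign.no.

        named-checkzone 0.0.10.in-addr.arpa /etc/bind/zones/rev.0.0.10.in-addr.arpa
        named-checkzone sognwebdesign.no /etc/bind/zones/sognwebdesign.no.db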

    Read the article

  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while connected remotely via Citrix, so the latency of my internet connection matters: if I'm getting ping times above 250ms, it becomes almost impossible to scroll, click or type with accuracy.

    Recently my Comcast business internet has been exhibiting highly variable ping times. If I ping google.com, I'll get pings ranging from 9ms all the way up to 1300ms. The problem seems to be at its worst between 1PM and 4:30PM; outside of those hours the variance settles down, mostly between 9ms and 50ms. The signal-to-noise ratio and upstream power are both fine on my modem; the values are here: http://pastebin.com/D4hWGPXf

    I ran a traceroute from my computer to google.com (the results are here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside of our local network (73.98.44.1); the variance in ping times was exactly the same as when pinging Google. Connecting directly to the cable modem by CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ -- as you can see, it can get pretty bad.

    Three Comcast techs have been out (two of them while the problem wasn't happening) and they, as well as the regional tier 2 Comcast support, were unable to diagnose the problem. I now have a ticket open with tier 3 support, but have yet to hear back from them. Does anyone know what could cause these sorts of problems, or have any idea from the traceroute above where it could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal; are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated!

    Edit: This is Comcast cable internet at a small start-up; we've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable).

    Update: Tier 3 Comcast support advised swapping out the modem. A tech came out today and did that; the same problem persists.
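
    One way to give an escalated ticket harder evidence is to log latency per hop on a schedule, so the point where the jitter starts can be tied to a specific hop; 73.98.44.1 is the first hop taken from the traceroute above. A rough sketch, assuming a Linux or macOS box on the same LAN with the mtr package installed (on Windows, pathping plays a similar role):

        # every 5 minutes: a 60-cycle per-hop report plus a quick ping summary to the first hop
        while true; do
            date >> latency.log
            mtr --report --report-cycles 60 google.com >> latency.log
            ping -c 20 -q 73.98.44.1 >> latency.log
            sleep 300
        done

    If the variance first appears at the hop just past the modem, and only during the 1PM-4:30PM window, that points to congestion on the provider's side of the connection rather than anything on the local network, which is the kind of data tier 3 support can act on.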

    Read the article

  • Cannot get to configure Kerberos for Reporting Services

    - by Ucodia
    Context: I am trying to configure Kerberos in the domain for double-hop authentication. These are the machines and their respective roles:

        client01: Windows 7, the client
        dc01:     Windows Server 2008 R2, domain controller and DNS
        server01: Windows Server 2008 R2, Reporting Services server (native mode)
        server02: Windows Server 2008 R2, SQL Server database engine

    I want client01 to connect to server01 and configure a data source located on server02 using Integrated Security. Since NTLM cannot push credentials that far, I need to set up Kerberos to enable double-hop authentication. The Reporting Services service runs as the Network Service account and is configured with only the RSWindowsNegotiate authentication option.

    Issue: I cannot get my client01 credentials passed through to server02 when configuring the data source on server01, so I get the error:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    I went to dc01 and delegated full trust for any service to server01, but that did not fix the problem. Note that I did not configure any SPNs for server01, because Reporting Services runs as Network Service and, from what I have read on the Internet, SPNs are registered automatically when Reporting Services runs under Network Service. My problem is that even if I wanted to configure the SPNs manually, I do not know where they have to be set up: on dc01 or on server01?

    I went a bit further and tried to trace the problem. From my understanding of Kerberos, this is what should happen on the network when I try to connect the data source:

        client01 ---- AS_REQ  ---> dc01
                 <--- AS_REP  ----
        client01 ---- TGS_REQ ---> dc01
                 <--- TGS_REP ----
        client01 ---- AP_REQ  ---> server01
                 <--- AP_REP  ----
        server01 ---- TGS_REQ ---> dc01
                 <--- TGS_REP ----
        server01 ---- AP_REQ  ---> server02
                 <--- AP_REP  ----

    I captured my local network with Wireshark, but whenever I try to configure the data source from client01 on server01 to pass my credentials to server02, my client never sends an AS_REQ or TGS_REQ to the KDC on dc01.

    Questions: Can anyone tell me whether I should configure the SPNs, and on which machine they have to be configured? Also, why does client01 never request a TGT or a TGS from my KDC? Do you think there is something going wrong with the DC role of dc01?
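
    A sketch of the SPN side, since that is where the description stalls; the commands are illustrative, not taken from the post. SPNs are attributes on Active Directory accounts, so they are registered with setspn from dc01 (or any machine with the AD tools) against the account each service runs as: with Reporting Services running as Network Service that is the server01 computer account, and for the SQL Server engine it is whatever account runs the MSSQLSERVER service on server02 (the computer account is assumed below). The CONTOSO / contoso.local domain name is a placeholder:

        rem run in an elevated prompt on dc01; list what is already registered
        setspn -L CONTOSO\server01$
        setspn -L CONTOSO\server02$

        rem HTTP SPNs for Reporting Services running as Network Service (machine account)
        setspn -S HTTP/server01 CONTOSO\server01$
        setspn -S HTTP/server01.contoso.local CONTOSO\server01$

        rem SQL Server SPN; substitute the SQL service account if the engine runs under a domain account
        setspn -S MSSQLSvc/server02.contoso.local:1433 CONTOSO\server02$

    With the SPNs in place, delegation is configured on the server01 computer account (constrained delegation to the MSSQLSvc SPN is usually preferred over "trust for any service"). As for client01 never sending AS_REQ/TGS_REQ: that often just means the request is falling back to NTLM, for example when the report server is reached by IP address instead of host name, so that is a cheap thing to check first.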

    Read the article

  • Glassfish JSF/EAR Apache 2.2 mod_proxy_ajp Referred Content Missing (images/links/etc)

    - by BillR
    Full disclosure: since this seems to be more of a configuration issue, I deleted this from Stack Overflow (where it wasn't getting any response) and reposted here. The problem is how to change the requestContextPath served up by Glassfish behind mod_proxy_ajp. The site/app runs fine when connecting directly to Glassfish on port 8080, which is ultimately not what I want to do, so I need help with the configuration of my servers and JSF deployment. I can see the issue but don't know how to resolve it, and it has to do with the requestContextPath.

    Simply put, Apache directs to http://mysite.com/welcome.xhtml, which is correct and what I want, but the page comes up without its images and styles. The issue is that Glassfish itself is still serving everything under http://mysite.com/myapp/*, so all links it generates still go through the requestContextPath -- the /myapp/ part of http://mysite.com/myapp/welcome.xhtml. In the page source, images referred to with relative links still point to the requestContextPath (/myapp/). That is fixable, but a real pain. Page links are worse, because I can't set a relative path for them: if I hover over the contact page link I see http://mysite.com/myapp/contact.xhtml, and if I click it I get a 404. The /myapp/ context path is visible in the page source as well. If I type the URL http://mysite.com/contact.xhtml directly, I get the page, again minus its referred content.

    On Apache:

        ProxyPass        / ajp://littlewalterserver:8009/myapp-web/
        ProxyPassReverse / ajp://littlewalterserver:8009/myapp_Project-web

    On Glassfish:

        asadmin create-network-listener --listenerport 8009 --protocol http-listener-1 --jkenabled true jk-connector

    I have tried going into Glassfish and setting the web app as the default web app, and I have changed the context root to / in glassfish-web.xml (and checked that it is the same in the EAR file). How can I get Glassfish to stop including the /myapp/ context in the URLs? This has to be easy if you know how, but I don't know how. Can someone help out here? Thanks.
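
    A sketch of the two configurations that usually resolve this kind of context-path mismatch: either keep the same path on both sides of the proxy so the links GlassFish generates stay valid, or deploy the application at the root context so GlassFish stops prefixing /myapp-web/ in the first place. The host and context names are taken from the question; the rest is an assumption, not a verified fix for this deployment.

        # Option 1: proxy with matching paths on both sides
        ProxyPass        /myapp-web/ ajp://littlewalterserver:8009/myapp-web/
        ProxyPassReverse /myapp-web/ ajp://littlewalterserver:8009/myapp-web/

        # Option 2: serve the application at "/" and keep Apache mapping "/"
        #   - for a standalone WAR, <context-root>/</context-root> in glassfish-web.xml is enough
        #   - for an EAR, the web module's <context-root> in application.xml usually wins,
        #     so it needs to be changed there as well before redeploying
        ProxyPass        / ajp://littlewalterserver:8009/
        ProxyPassReverse / ajp://littlewalterserver:8009/

    Two notes: the pasted ProxyPass and ProxyPassReverse point at two different backend paths (myapp-web vs myapp_Project-web), which is worth reconciling whichever option is chosen; and ProxyPassReverse only rewrites HTTP response headers such as Location, not the URLs inside the HTML that JSF renders, which is why the page links keep showing /myapp/ as long as the application itself is deployed under that context root.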

    Read the article
