Search Results

Search found 11960 results on 479 pages for 'auto implemented propert'.

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10, using s3fs (FUSE over Amazon S3). s3fs is compiled from source according to the instructions, etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:

        # /etc/fstab: static file system information.
        # <file system>   <mount point>     <type>  <options>                                            <dump>  <pass>
        proc              /proc             proc    nodev,noexec,nosuid                                  0       0
        LABEL=uec-rootfs  /                 ext4    defaults                                             0       0
        /dev/sda2         /mnt              auto    defaults,nobootwait,comment=cloudconfig              0       2
        /dev/sda3         none              swap    sw,comment=cloudconfig                               0       0
        s3fs#mybucket     /mnt/s3/mybucket  fuse    default_acl=public-read,use_cache=/tmp,allow_other   0       0

    So the good news is that this works fine. On reboot the connection mounts correctly. I can also do:

        $ sudo umount /mnt/s3/mybucket
        $ sudo mount -a
        $ mountpoint /mnt/s3/mybucket
        /mnt/s3/mybucket is a mountpoint

    Great, right? Well, here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process:

        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    I isolated the problem and built a Fabric task that reproduces it:

        def remount_s3fs():
            sudo("mount -a")

    which does:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a

    (And yes, I was sure to unmount it before running this task.) When I check the mount using mountpoint I get:

        $ mountpoint /mnt/s3/mybucket
        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"

    Again, I get that "transport endpoint is not connected" error. I've also tried copying and pasting the exact command into my SSH session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?
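
    One way to narrow this down (a hedged sketch, not part of the original report): Fabric allocates a pty by default (note the /bin/bash -l -c wrapper in the debug output), and a FUSE helper started on a pty can be killed by SIGHUP when the session tears down, leaving exactly this "Transport endpoint is not connected" state. The host name below is a placeholder:

        # Reproduce outside Fabric: compare a plain ssh run with a forced-pty
        # run ("-t" mimics Fabric's default pty allocation).
        ssh ec2-host 'sudo mount -a && mountpoint /mnt/s3/mybucket'
        ssh -t ec2-host 'sudo mount -a && mountpoint /mnt/s3/mybucket'
        # If only the -t variant leaves the mount broken, try disabling the
        # pty in the task, e.g. sudo("mount -a", pty=False) in Fabric 1.x.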

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7.

    Occasionally the server will completely hang when transferring large files. When the hang happens, the console becomes unresponsive with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hanging can last around 20 minutes. During this time other servers also become unresponsive, with blank wallpaper at the console. If you do manage to get onto another server, the taskbar and run commands are unresponsive. This sometimes extends to the client computers as well, with explorer crashing; I'm guessing this is due to the HOME folder mapping. Eventually the NAS server will free up and everything goes back to normal.

    The server is configured as follows:

        PERC 4/DC:
            DATA        2 - 12 SCSI HDD - RAID5
            SHADOWCOPY  2 SCSI HDD      - RAID1
        CERC SATA:
            DATA 11     4 SATA HDD      - RAID5
            OS          4 SATA HDD      - RAID5

    All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean, including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging still happens with it uninstalled. There are no errors in the event log when this happens, and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain and AD, from what I can tell, is running happily. There are no DFS or NFS shares set up either; all the shares are standard Windows shares.

    What I've tried so far:

        - Unchecked the "allow the computer to turn off this device to save power" box under Power Management on the NIC.
        - Set Link Speed and Duplex to Auto-negotiate 1000.
        - Increased the Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data).
        - Run network traces using Network Monitor, which found the following:

            417  8.078125  {SMB:192, NbtSS:25, TCP:24, IPv4:23}  192.168.2.244  192.168.5.35  SMB
            SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND

        - Tried different cabling, NICs and switch ports, all with the same result.
        - On the Vista clients, run netsh interface tcp set global autotuning=disabled, with no result.

    Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. Could it be that the server has a faulty drive, or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many thanks.

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The SVN repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold other ASP code as well. Bonus points for a solution which works more or less the same way with ASP code on Windows Server 2008 R2.

    We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk; the live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: develop on the dev's box, commit into the trunk, auto-deploy on the staging system, test on the staging system, merge into /branches/live/, manually deploy on the live system.

    This works very well for one-way changes. However, we have some trouble on every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the SVN admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. In the end, we did the update via the WP GUI, restored the SVN admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).

    I'm currently thinking of the following (see the sketch below):

        - The htdocs of each website is an svn export.
        - Each website has an SVN working copy beside the htdocs directory.
        - A script "replays" the changes from htdocs into the working copy after an update in WP (rsync'ing the changed files to the working copy, rsync'ing new files and svn add'ing them, and finally svn delete'ing the deleted files). The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.).

    Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
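
    For the "replay" step, a minimal sketch under stated assumptions (the paths, excludes and svn status parsing are illustrative only; paths containing spaces would need more care; test on a throwaway copy first):

        #!/bin/bash
        # Sync WP-updated files from the exported htdocs back into the
        # adjacent working copy, then stage adds and deletes.
        HTDOCS=/var/www/site/htdocs   # hypothetical path
        WC=/var/www/site/wc           # hypothetical path

        rsync -a --delete \
          --exclude 'wp-config.php' \
          --exclude 'wp-content/uploads/' \
          "$HTDOCS/" "$WC/"

        cd "$WC" || exit 1
        # Files WP added show up as unversioned '?'; files it removed as '!'.
        svn status | awk '/^\?/ {print $2}' | xargs -r svn add
        svn status | awk '/^!/ {print $2}' | xargs -r svn delete
        svn commit -m "Replay WordPress update into the trunk"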

  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda. I suspect (correct me if I'm wrong) that if I reboot the server it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc, because I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me). Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting?

    This is on Ubuntu 10.04 LTS. It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it):

        # mdadm --detail /dev/md0
        /dev/md0:
                Version : 00.90.03
          Creation Time : Sun Oct 11 21:07:54 2009
             Raid Level : raid1
             Array Size : 97536 (95.27 MiB 99.88 MB)
          Used Dev Size : 97536 (95.27 MiB 99.88 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Thu Jun 30 09:31:16 2011
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4
                 Events : 0.112

            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed

    Thanks.

    Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why:

        Boot time autodetection of RAID arrays
        When md is compiled into the kernel (not as module), partitions of type
        0xfd are scanned and automatically assembled into RAID arrays. This
        autodetection may be suppressed with the kernel parameter
        "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0
        superblock can be autodetected and run at boot time.
        The kernel parameter "raid=partitionable" (or "raid=part") means that
        all auto-detected arrays are assembled as partitionable.

    I do have md compiled into the kernel, so I'm rebuilding the array now and will do the reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep the noise down.
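
    On the narrower question of whether /dev/sdc is safe to use now: md tracks array members by the superblock UUID rather than by the device node, so re-adding the disk under its current name (a sketch, device names taken from the question) should survive a rename at the next reboot:

        sudo mdadm /dev/md0 --add /dev/sdc1          # add under its current name
        cat /proc/mdstat                             # watch the resync
        sudo mdadm --detail /dev/md0 | grep UUID     # the array is tracked by UUID
        sudo mdadm --examine /dev/sdc1 | grep UUID   # the member carries that UUID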

  • Separate zone exceptions for each view in BIND

    - by Stefan M
    Problem: separate zones by query source network and return different records for LAN clients compared to WAN clients.

    I've implemented this at home on a small Alix router with BIND 9.4: one view called "lan" and one view called "wan". The "lan" view had just the root.hints file and one zone. The "wan" view had many other zones, including a copy of the one zone from the "lan" view, but with different records. Querying domain1.tld from the LAN would give me local records; querying domain1.tld from the WAN would give me external records. Querying domain2.tld from the LAN would give me the same records as from the WAN, as it only existed in the WAN view.

    Now I'm trying to re-implement this on a larger scale, and suddenly my view is unable to query anything outside itself. This is natural according to the bind-users list, and they suggest I copy all my zones into my LAN view. I'm hoping someone here has a better solution, because that would mean copying, and maintaining, thousands of zone files in multiple views. This is unfeasible.

    My configuration at home resembles this:

        acl lanClients {
            192.168.22.0/24;
            127.0.0.1;
        };

        view "intranet" {
            match-clients { lanClients; };
            recursion yes;
            notify no;

            // Standard zones
            zone "." {
                type hint;
                file "etc/root.hint";
            };

            zone "domain1.tld" {
                type master;
                file "intranet/domain1.tld";
            };
        };

        view "internet" {
            match-clients { !localnets; any; };
            recursion no;
            allow-transfer { slaveDNS; };
            include "master.zones";
        };

    Requests from the LAN for domain1.tld give local records; requests from the WAN give remote records. This works fine both at home and in my new BIND 9.7 setup on a larger scale. The difference is that at home I have somehow managed to make my LAN get remote records for domains in master.zones, without specifying those zones as duplicates in the "intranet" view. Trying this on a larger scale with BIND 9.7, I get no results at all except for the zones specified in the view. What am I missing? I've tried the same configuration with BIND 9.7.
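
    For what it's worth, later BIND releases (9.10 and up, so newer than the 9.7 in the question) added an in-view option that shares one zone between views without duplicating zone files; a hedged sketch, assuming the "internet" view is declared earlier in named.conf (its !localnets match already skips LAN clients, so it can safely come first):

        view "intranet" {
            match-clients { lanClients; };
            recursion yes;

            zone "domain1.tld" {
                type master;
                file "intranet/domain1.tld";
            };

            // Serve the copy already defined in the previous view:
            zone "domain2.tld" {
                in-view "internet";
            };
        };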

  • Suspicious process running under the user "named"

    - by Amit
    I get a lot of emails reporting this, and I want this issue to auto-correct itself. These processes are run by my server and are a result of updates, session deletion and other legitimate session handling, reported as false positives. Here's a sample report:

        Time:    Sat Oct 20 00:00:03 2012 -0400
        PID:     20077
        Account: named
        Uptime:  326117 seconds

        Executable: /usr/sbin/nsd\00507d27e9\0053\00\00\00\00\00 (deleted)

        The file system shows this process is running an executable file that
        has been deleted. This typically happens when the original file has
        been replaced by a new file when the application is updated. To prevent
        this being reported again, restart the process that runs this
        excecutable file. See csf.conf and the PT_DELETED text for more
        information about the security implications of processes running
        deleted executable files.

        Command Line (often faked in exploits):
        /usr/sbin/nsd -c /etc/nsd/nsd.conf

        Network connections by the process (if any):
        udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0
        udp: 127.0.0.1:53 -> 0.0.0.0:0
        udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0
        tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0
        tcp: 127.0.0.1:53 -> 0.0.0.0:0
        tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0

        Files open by the process (if any):
        /dev/null
        /dev/null
        /dev/null

        Memory maps by the process (if any):
        0045e000-00479000 r-xp 00000000 fd:00 2582025 /lib/ld-2.5.so
        00479000-0047a000 r--p 0001a000 fd:00 2582025 /lib/ld-2.5.so
        0047a000-0047b000 rw-p 0001b000 fd:00 2582025 /lib/ld-2.5.so
        0047d000-005d5000 r-xp 00000000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so
        005d5000-005d7000 r--p 00157000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so
        005d7000-005d8000 rw-p 00159000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so
        005d8000-005db000 rw-p 005d8000 00:00 0
        005dd000-005e0000 r-xp 00000000 fd:00 2582087 /lib/libdl-2.5.so
        005e0000-005e1000 r--p 00002000 fd:00 2582087 /lib/libdl-2.5.so
        005e1000-005e2000 rw-p 00003000 fd:00 2582087 /lib/libdl-2.5.so
        0062b000-0063d000 r-xp 00000000 fd:00 2582079 /lib/libz.so.1.2.3
        0063d000-0063e000 rw-p 00011000 fd:00 2582079 /lib/libz.so.1.2.3
        00855000-0085f000 r-xp 00000000 fd:00 2582022 /lib/libnss_files-2.5.so
        0085f000-00860000 r--p 00009000 fd:00 2582022 /lib/libnss_files-2.5.so
        00860000-00861000 rw-p 0000a000 fd:00 2582022 /lib/libnss_files-2.5.so
        00ac0000-00bea000 r-xp 00000000 fd:00 2582166 /lib/libcrypto.so.0.9.8e
        00bea000-00bfe000 rw-p 00129000 fd:00 2582166 /lib/libcrypto.so.0.9.8e
        00bfe000-00c01000 rw-p 00bfe000 00:00 0
        00e68000-00e69000 r-xp 00e68000 00:00 0 [vdso]
        08048000-08074000 r-xp 00000000 fd:00 927261 /usr/sbin/nsd
        08074000-08079000 rw-p 0002b000 fd:00 927261 /usr/sbin/nsd
        08079000-0808c000 rw-p 08079000 00:00 0
        08a20000-08a67000 rw-p 08a20000 00:00 0
        b7f8d000-b7ff2000 rw-p b7f8d000 00:00 0
        b7ffd000-b7ffe000 rw-p b7ffd000 00:00 0
        bfa6d000-bfa91000 rw-p bffda000 00:00 0 [stack]

    Would /etc/nsd/restart or kill -1 20077 solve the problem?
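
    A full restart, rather than kill -1, is the likelier fix (a hedged sketch: a HUP typically reloads zones without re-executing the binary, while a restart maps the current on-disk executable and so clears the "(deleted)" report; the init script name varies by distro):

        sudo nsdc restart                       # NSD 3.x control tool
        # or: sudo /etc/init.d/nsd restart
        # Verify: the exe link should no longer say "(deleted)"
        sudo ls -l /proc/"$(pgrep -o nsd)"/exe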

  • Should I use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.)

    Here's the setup: a brand new dedicated server (8GB RAM, some 140+ GB disk, RAID 1 via HW controller, 15000 RPM). It's a production web server (with MySQL on it too, not just serving web requests), not a personal desktop computer or similar. It runs Ubuntu Server 64-bit 10.04 LTS.

    We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server, and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server.

    There are two approaches I am thinking of:

        1. Do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something.
        2. Do a file-system "freeze" (using XFS) with the usual EBS/EC2 snapshot tool, take a snapshot, and upload it to Amazon.

    Here's my question (or series of questions):

        - Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4? Or should I use something else?
        - Would it then be possible to make snapshots of the system to upload to Amazon?
        - Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations?

    When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/metal dedicated server.

    Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose two ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these two options are more like it:

        With XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS.

        With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot.

    Thanks a lot in advance. Let me know if I need to clarify something.
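
    For approach 1 on bare metal, a minimal sketch of the freeze-copy-thaw cycle (mount point, staging path and bucket are placeholders; writers block for the whole freeze window, and you must never freeze the filesystem you are running the commands from):

        #!/bin/bash
        MNT=/srv/data                          # hypothetical XFS mount
        sudo xfs_freeze -f "$MNT"              # flush and block new writes
        trap 'sudo xfs_freeze -u "$MNT"' EXIT  # always thaw, even on error
        rsync -a "$MNT/" /backup/staging/
        sudo xfs_freeze -u "$MNT"
        trap - EXIT
        s3cmd sync /backup/staging/ s3://example-backup-bucket/

    With LVM underneath (the second option), the freeze window shrinks to the moment of the lvcreate snapshot instead of the whole copy.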

  • Looking for app to work fluidly with CSV data in graph form

    - by Aszurom
    It often occurs to me that if I had a good tool for viewing CSV data in graphical format, and comparing two sets of numbers to each other, I could do a great deal of meaningful trend watching and data interpretation. For example, perfmon can output quite a lot of data about a server into a CSV file, but there's no good way to view it. A lot of scripts could be (or have been) written that would populate CSV files. I could write these all day long. My problem is that I need a great viewer.

    I've seen quite a few things that will take a CSV file and, after a lot of tweaking and user adjustment, produce a static gif/png image. A static image doesn't do me a lot of good, because I have to look at it, then re-calibrate the parameters of the program, regenerate the image, repeat. That sucks. I could do this in Excel.

    Ideally, I would want a FLUID graph viewer. On the fly, I can adjust how much of my timeline I'm viewing. I could adjust the scaling so that one big spike doesn't make 99.9% of the data an unreadable line across the bottom of the X axis. Stuff like that. I should be able to say "show me CSV column 3 and column 5 as graphs. Show me the data scaled for 20 or 150 entries, and let me slide that window up and down the column of data. Auto-scale to fit 95% of data within the Y axis and let crazy spikes go off the screen."

    Maybe I'm terribly spoiled by how you can drag, zoom, and slide data around on my iPad. I want to be able to view a spreadsheet of data with that fluidity and not have to guess at what sort of static snapshot I want to create from it. I don't want to have to make a study of how to tweak some data plotting program to let me import my file and do what I could just do in Excel. I want to scale, zoom, and transform my graph on the fly and then export a snapshot of it once I have it the way I want it.

    Is there anything out there that fills this need? I'll take Linux, OS X, win32 or even iOS suggestions.

  • Plesk Postfix Mail Server 9.5.4 very heavy load, 1000s of processes

    - by Eugene van der Merwe
    Our Plesk Linux Ubuntu 64-bit mail server has extremely high load and we don't know how to isolate it. The load was okay until two weeks ago, but in the last two weeks it has seriously deteriorated. The mail server has been running for years and we have had sporadic performance issues. Normally we reduce the load by turning off all SPAM checks until the problem is sorted (which sometimes resolves itself).

    Currently we have turned off real-time block lists and SPF checking, and we have attempted to turn off SpamAssassin. No matter what we do, the SpamAssassin check box stays ticked in the GUI. Out of desperation we have done /etc/init.d/psa-spamassassin stop. For years we haven't been able to run SpamAssassin because it kills the server. We would like to use it, but performance is more important for now.

    We cannot turn off greylisting; the moment we turn off greylisting, our help desk is inundated with calls. Out of desperation we investigated truncating the greylisting database, which is now 2.5 GB big, but we abandoned this after noticing that turning off greylisting doesn't improve the performance at all.

    We have no anti-virus. It's just more load, and Dr.Web never really worked that well for us. But we'll try that if it will make a difference.

    We have implemented Postfix Anvil. This seemed to make the situation worse, so we disabled it (we're not sure if it really was the cause).

    Our current mail server is configured to forward all SMTP to a relay server. We did so to reduce the load. This helped a lot, because the outgoing queues are generally empty. We are running in an Expand configuration. The mail server has about 12,000 accounts, of which maybe half are active.

    We have read through http://www.postfix.org/STRESS_README.html, but there are too many settings and we don't know which ones to choose. Please assist urgently; we need advice on how to fix this problem before all our clients abandon us. The only clue we have is that there are hundreds of these processes:

        30 13205 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
        30 13207 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
        30 13208 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote
        30 13209 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote
        30 13213 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
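
    A first triage step (a sketch; qshape ships with Postfix but is not installed everywhere) is to count those handlers and see where mail actually piles up:

        ps -eo args | grep -c '[p]ostfix-queue'   # how many Plesk queue handlers?
        mailq | tail -n 1                         # total queued messages
        qshape deferred | head                    # age/domain profile, if available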

  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are from an AD domain (not local users). In the properties of the website, I have set the AD domain as the realm.

    When using Firefox, Safari or Chrome, everything is fine. When the user tries to open the site, he gets the login box, enters simply "username" and "password" (let's pretend those are an actual login and password :P), and gets into the site.

    When using IE, however, things get nasty. When the user tries to open the site, he gets the login box. The user enters the "username" and "password" again, but those get rejected! And when the login box pops up the second time, it has the username filled in as "web-server-domain-name\username", which is wrong, because web-server-domain-name is not the domain where the users reside (that's "ad-domain"). I've spent days trying to figure out what's going on... Note that if I manually enter "ad-domain\username", I get accepted into the site without problems. So my guess is that IE sends the wrong domain if one is not specified. Anyway, IE is the only browser that triggers this behavior!

    Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side, is there a client-side fix for this? Thank you.

    PS: I'm more of a programmer than a sysadmin, so configuring servers isn't my strong side... :P

    UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: IIS version is 6.0. The authentication methods enabled are Integrated and Digest; all other methods are disabled. As for the security log: I looked at it when doing a "username"/"password" login in Chrome/Firefox and when doing an "ad-domain\username"/"password" login from IE, and the generated log messages are the same (I see no difference, anyway). When entering "username" and "password" I don't see any errors in the security (or any other) log, so I can't tell what method it's trying to use.

    UPDATE 2: As suggested by Eric in the comments, I played around with Fiddler... While playing with it, I noticed that when "username" and "password" are entered in FF and IE, the "Authorization" header value (encrypted) sent by IE is almost twice as long as the one sent by FF. I tried disabling Windows Integrated authentication and leaving only Digest enabled; that fixed the problem (meaning IE used the right realm, just like the other browsers), but it caused a bazillion other problems with my site, because with Digest, user impersonation on the server doesn't work (which causes problems when connecting to the database, etc.). Any ideas?

  • What are incentives (if any) to use WinRT instead of .Net?

    - by Ark-kun
    Let's compare WinRT with .Net.

    .Net:

        - .Net is the 13+ year evolution of COM. Its three main parts are the execution environment, the standard libraries and the supported languages.
        - The CLR is the native-code execution environment, based on COM.
        - The .Net Framework has a big set of standard libraries (implemented using managed and native code) that can be used from all .Net languages. There are .Net classes that allow using OS APIs.
        - WPF or Silverlight provide a XAML-based UI framework.
        - .Net can be used with C++, C#, JavaScript, Python, Ruby, VB, LISP, Scheme and many other languages. C++/.Net is a variation of the C++ language that allows interaction with .Net objects.
        - .Net supports inheritance, generics, operator and method overloading and many other features.
        - .Net allows creating apps that run on Windows (XP, 7, 8 Pro (Desktop and Metro), RT, CE, etc.), Mac OS, Linux (+ other *nix); iOS, Android, Windows Phone (7, 8); Internet Explorer, Chrome, Firefox; Xbox 360, PlayStation Suite; raw microprocessors.
        - There is support for creating games (2D/3D) using any managed language or C++.
        - Created by the Developer Division.

    WinRT:

        - WinRT is based on COM. Its three main parts are the execution environment, the standard libraries and the supported languages.
        - WinRT has a native-code execution environment based on COM.
        - WinRT has a set of standard libraries that more or less can be used from WinRT languages. There are WinRT classes that allow using OS APIs. An unnamed Silverlight clone provides a XAML-based UI framework.
        - WinRT can be used with C++, C#, JavaScript, VB. C++/CX is a variation of the C++ language that allows interaction with WinRT objects.
        - Custom WinRT components don't support inheritance (classes must be sealed), generics, operator overloading and many other features.
        - WinRT allows creating apps that run on Windows 8 Pro and RT (Metro only); Windows Phone 8 (limited). There is support for creating games (2D/3D) using C++ only.
        - Ordered by the Windows team.

    I think that all the aspects except the last ones are very important for developers. On the other hand, it seems that the most important aspect for Microsoft is the last one. So, given the above comparison of conceptually identical technologies, what are the incentives (if any) to use WinRT instead of .Net?

  • inews failed: "No colon-space in "X-MS-TNEF-Correlator:"

    - by wolfgangsz
    We run a news server for our engineering teams, which is also linked to the code repositories (so that all engineers can subscribe to any changes in the repos, or just the projects they are interested in). On quite a regular basis (several times a day), I (as the sysadmin for that server) receive bounces from innd with the above as the first line. The news server simply rejects these messages and the articles don't get posted. Here is an example:

        inews failed:
        inews: cannot send article to server: 441 437 No colon-space in "X-MS-TNEF-Correlator:" header
        inews: article not posted

        -------- Article Contents

        Path: aminocom.com!ctaylor
        From: [email protected] (Cameron Taylor)
        Newsgroups: amino.qa.reports
        Content-Language: en-US
        Content-Type: multipart/alternative;
            boundary="_000_A2AB95742ADD524795C13EDE8F8CCD201A798C0Eukswaex01_"
        MIME-Version: 1.0
        Subject: [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE**
        Message-ID:
        Date: Thu, 9 Sep 2010 16:15:16 +0000
        X-Received: from uk-swa-ex02.aminocom.com (uk-swa-ex02.aminocom.com [10.171.3.10])
            by theoline.aminocom.com (8.14.3/8.13.8) with ESMTP id o89GF8tx019494
            for ; Thu, 9 Sep 2010 17:15:08 +0100
        X-Received: from uk-swa-ex01.aminocom.com ([10.171.3.9]) by uk-swa-ex02
            ([10.171.3.10]) with mapi; Thu, 9 Sep 2010 17:15:18 +0100
        X-To: QA Reports
        X-Thread-Topic: [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE**
        X-Thread-Index: ActQOjBdms0CSJsORNSxRIMSZ4H3Ow==
        X-Accept-Language: en-US, en-GB
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator:
        X-Auto-Response-Suppress: DR, OOF, AutoReply

        --_000_A2AB95742ADD524795C13EDE8F8CCD201A798C0Eukswaex01_
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: quoted-printable

        SQA Test Report
        [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE**
        Status ....

    (The rest of the message is not important.) And yes, quite clearly this header doesn't have anything after the colon. The man page for innd doesn't specify why it rejects these messages, it just says it rejects them.

    So far I have found out these headers are linked to messages in RTF format (coming from Outlook clients), where normally the formatting information would be stored in a winmail.dat attachment. The clients all use MS Exchange 2010 servers to send their mail (identified above as uk-swa-ex02.aminocom.com), which forward the message to the news server.

    Does anybody know what advice I need to give these users to avoid their articles getting bounced? Or can I change the behaviour of innd? Or do I need to filter these headers out before innd processes the articles?
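
    One hedged workaround is to strip the empty Exchange/TNEF headers before the article reaches inews; formail (from procmail) can do it, though where this filter hooks into your mail-to-news gateway is an assumption:

        # "-I" with a bare field name deletes that header from the article
        # on stdin; "-h" tells inews the input includes headers.
        formail -I "X-MS-TNEF-Correlator:" \
                -I "X-MS-Has-Attach:" \
                < article.txt | inews -h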

  • Windows 7 Home hangs at "Welcome" screen

    - by White Phoenix
    I'm asking on behalf of a friend who's currently having problems with his machine, running Windows 7 Home 32-bit. He's too far away for me to help by going over to his house; I'm helping him over the Internet.

    This is his current machine: http://www.newegg.com/Product/Product.aspx?Item=N82E16883227134 The only two changes he made to that machine are swapping the gfx card for an EVGA GTX 460 and the PSU for a Corsair TX650.

    Here's what happened: he was playing a computer game (fairly CPU/GPU intensive) and had some music going in the background in foobar while playing. Suddenly, he noticed the music stopped playing, so he switched to foobar to try to close it, but it froze up (the window wouldn't respond). So he figured it was just foobar having a bad day and force-quit that program. Suddenly, his game wouldn't respond either, so he force-quit that, then the entire computer just went to crap at that point, so he hit the restart button on his machine.

    The computer POSTs fine, but now he gets stuck at the Windows "Welcome" screen (his account is set to auto-login). The HD activity light is solid yellow, but he doesn't hear HDD activity. He tried booting into Safe Mode and gets stuck at the "Welcome" screen. He tried a Startup Repair from within Windows 7; it found a few problems, but he still gets stuck at "Welcome". I advised him to boot off the DVD: sfc /scannow found nothing (he couldn't use the regular /scannow option; it says there's a repair pending, so he had to use the offbootdir/offwindir command switches). We ran Startup Repair three times; it found nothing. My friend runs virus/malware scans on a regular basis, so he's fairly sure it's not that either.

    Right now I'm having my friend run chkdsk /R on the computer while in this Startup Recovery mode; so far it's caught a few bad sectors. However, at this point I'm kind of wondering which way to go if chkdsk doesn't fix it. A quick Google search said someone had success by booting Windows with boot logging on; others have success with running the aforementioned chkdsk, etc. The fact that Windows cannot even boot into Safe Mode concerns me.

    While we're waiting for chkdsk /R to finish: are there any other options I can give my friend short of reinstalling Windows 7? He has his data on a separate partition, so that's not a major problem (though it'll be an annoyance for him). I suspect his hard drive may be having some issues, but my main concern is getting him back up and running before we start diagnosing the hard drive (I may have him run some sort of SMART test utility later).

  • Problem with several USB devices on Windows XP. How to diagnose USB device?

    - by Lukasz Baran
    Hello all, recently I've bought some electronic devices (e.g. a Sony ebook reader PRS-600 and an HP iPAQ PDA) which are connected to the PC using USB ports. These devices have their own batteries, and they are recharged over USB.

    However, I am experiencing a problem with the ebook reader, and a similar thing happens with my PDA (though in that case it's more rare). Sometimes I am unable to make it start recharging when it's connected to a USB port. The LED on my reader lights up and the icon in the system tray appears; they both indicate that the device has been connected. However, the device should start recharging and it should appear in Windows XP as an explorable device (just like a pendrive it offers some HDDs). Neither of these happens :(

    This problem appears on two PCs, a desktop and a laptop. Both have Windows XP SP3 installed. What's interesting and surprising to me is the fact that these devices sometimes work without any problems: I just plug them in and they work properly. Besides that, there are some computers (I have tested a variety of possible conditions) on which the problem does not appear!

    I am confused and, on the other hand, determined to find out the real cause of such strange behavior. Maybe I'm wrong, but I always assumed I was using deterministic machines; the described situation makes me feel like I was dreaming :) And that's why I'm asking you here.

    Are there any tools that could help me diagnose USB ports? Or maybe anyone knows the solution? I have a lot of experience with low-level programming (mostly from the times of MS-DOS applications), but I'm not an expert when it comes to WinXP technology. If this were a problem with a network connection, I would know what to do (I know how to track network traffic), but with USB ports I feel powerless. I have a feeling that the problem may lie in the way WinXP handles USB devices. Personally, I hate the auto-discovery and P'n'P features available in Windows.

    I know some may suggest moving to Linux or another *nix system. Unfortunately, I have to use WinXP in my professional life, so it's rather impossible. Besides that, I don't consider XP a total disaster. Any help from you will be appreciated!

  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the progress bar filling, and different phases are mentioned on it. Nothing weird there. When the merging is done (and if I don't blink my eyes), I can see the Layers palette is populated with the chosen files and, quickly judging from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always.

    The problem is that the layers and the image disappear in a flash. There is no error message. Everything is as it was prior to starting the Photomerge. No file has been changed. I can continue to use Photoshop normally.

    This is what I've tried so far:

        - Loaded a folder which has 38 JPG images, 4272 x 2848 and ~5 megabytes per file
        - Loaded the same files, but chose Use Files instead of Use Folder in the Photomerge window
        - Loaded 19 JPG images, 4272 x 2848 and ~5 megabytes per file
        - Loaded 10 JPG images (see above)
        - Loaded 5 JPG images (see above)
        - Loaded 3 JPG images (see above)
        - Scaled the images to 2256 x 1504 and under ~1 megabyte per file, and loaded them in sets of 38, 19, 10, 5 and 3

    The following steps were tested with these smaller files and with a set of 5 images:

        - Read Adobe's forums and reduced the amount of RAM Photoshop uses gradually from ~80% to 50% (though I didn't understand the logic behind this)
        - Would have reduced the cache tile size to 128K, but it was set so already
        - Disabled OpenGL
        - Scaled the images to 800 x 533 and ~100 kilobytes per file, and loaded a set of 5
        - Read more unanswered threads around the internet

    In between each test I closed and reopened Photoshop. This is the first time I've ever tried using Photomerge. Am I doing something wrong? How can I locate what the problem is? How do I fix this?

    Photoshop is the 64-bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6.

    Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which, effectively, is the same as Photomerge (even the dialog looks kind of the same), works! Even with the original JPGs, without any issues. This doesn't fix Photomerge, though.

  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup is:

        Cobbler server:   openSUSE 12.3, cobbler 2.2.2 (from the openSUSE repos)
        Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1
        Target machine:   VMs in a Windows 7 VirtualBox

    System provisioning works OK, but I have some problems. The first one is that cobbler does not honor the "pxe_just_once: 1" setting. When the setup of the target OS is finished, after the reboot the target system continues to PXE boot!

    The second problem is that the target server is not correctly configured! See my setup:

        cobbler system report --name=test
        Name                           : test
        TFTP Boot Files                : {}
        Comment                        :
        Fetchable Files                : {}
        Gateway                        : 192.168.0.1
        Hostname                       : testcob1.example.com
        Image                          :
        IPv6 Autoconfiguration         : False
        IPv6 Default Device            :
        Kernel Options                 : {}
        Kernel Options (Post Install)  : {}
        Kickstart                      : <<inherit>>
        Kickstart Metadata             : {}
        LDAP Enabled                   : False
        LDAP Management Type           : authconfig
        Management Classes             : []
        Management Parameters          : <<inherit>>
        Monit Enabled                  : False
        Name Servers                   : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path       : []
        Netboot Enabled                : False
        Owners                         : ['admin']
        Power Management Address       :
        Power ID                       :
        Power Password                 :
        Power Management Type          : ipmitool
        Power Username                 :
        Profile                        : RHEL-6.5-x86_64
        Proxy                          : <<inherit>>
        Red Hat Management Key         : <<inherit>>
        Red Hat Management Server      : <<inherit>>
        Repos Enabled                  : False
        Server Override                : <<inherit>>
        Status                         : testing
        Template Files                 : {}
        Virt Auto Boot                 : <<inherit>>
        Virt CPUs                      : <<inherit>>
        Virt Disk Driver Type          : <<inherit>>
        Virt File Size(GB)             : <<inherit>>
        Virt Path                      : <<inherit>>
        Virt RAM (MB)                  : <<inherit>>
        Virt Type                      : <<inherit>>

        Interface =====                : eth0
        Bonding Opts                   :
        Bridge Opts                    :
        DHCP Tag                       :
        DNS Name                       :
        Master Interface               :
        Interface Type                 :
        IP Address                     : 192.168.0.200
        IPv6 Address                   :
        IPv6 Default Gateway           :
        IPv6 MTU                       :
        IPv6 Secondaries               : []
        IPv6 Static Routes             : []
        MAC Address                    :
        Management Interface           : True
        MTU                            :
        Subnet Mask                    : 255.255.255.0
        Static                         : True
        Static Routes                  : []
        Virt Bridge                    :

    So, although I have set up the hostname and the network interface of the target system, after the setup the hostname is set to localhost.localdomain and eth0 is configured with DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted cobbler a couple of times, but the problems persist.
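
    On the pxe_just_once side, two hedged checks: the flag only takes effect after a sync, and it works by the installer calling back to cobblerd at the end of %post to switch netboot off, so the target must be able to reach the cobbler server during the install:

        grep pxe_just_once /etc/cobbler/settings   # should read: pxe_just_once: 1
        sudo cobbler sync                          # regenerate the PXE configs
        # If the VMs cannot reach cobblerd over the network during %post, the
        # netboot flag is never cleared and the endless PXE loop reappears.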

  • Virtualbox - routing subnet to bridge adapters

    - by user42384
    Hello, I have set up a Debian Lenny box with three VirtualBox Lenny machines running on eth0 of the host in bridged mode (on VirtualBox 3.1.6). When testing on my local LAN this all worked perfectly well, and traffic flowed to and from the IPs of the virtual machines as it should.

    However, now that it's in its co-lo home, the networking setup is a bit different and I'm unable to get traffic to flow to the vboxes properly. Specifically, the host has its own primary IP, and I have a separate subnet of 8 (6 usable) IPs routed to the box for use by the vboxes. So, eth0 on the host is:

        Machine IP: 2x.x.x.137
        Gateway IP: 2x.x.x.138
        Subnet Msk: 255.255.255.252

    The subnet for the vboxes is:

        Subnet:  2x.x.x.240/29
        Netmask: 255.255.255.248

    vbox1 is configured to 2x.x.x.241 on eth0 as follows:

        auto eth0
        iface eth0 inet static
            address 2x.x.x.241
            netmask 255.255.255.248

    Setting up a virtual interface (eth0:0) on the host with one of these subnet IPs allows me to ping that address only from vbox1, and it allows me to ping vbox1 from the host. I can also ping that virtual interface perfectly well from outside, so the IPs are definitely landing at my machine.

    It seems I'm missing some sort of routing instruction, either on the host or on vbox1, to get traffic moving between the subnet and the default gateway, but I can't seem to figure out what it should be, or what glaringly obvious thing I'm missing. Most of my obvious attempts (the gw of eth0, the IP of eth0) were rejected by the route command with "SIOCADDRT: No such device" (i.e., it can't find it). I tried setting vbox1 to bridge on eth0:0, but this was not an acceptable device name and VBoxHeadless refused to start. The physical machine does have an unused physical NIC at eth1 that can be used if necessary for something or other.

    The host machine is running iptables configured by ferm. I have experimented with allowing forwarding for that subnet, but I wouldn't have thought this was necessary given the nature of the VirtualBox devices (nor did it actually work). Clearing out all of these rules for a blank iptables set does not resolve the issue. (You can see the ferm-generated iptables at http://codedumper.com/ojaze.)

    Thanks for any help you can give... Patrick
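
    One hedged alternative to bridging (a sketch, since the colo already routes the /29 to the host's primary address): let the host terminate the subnet and route it, e.g. on a VirtualBox host-only network. The interface name and the chosen gateway address are assumptions:

        # Host side: give the host the last usable /29 address as the guests'
        # gateway and enable forwarding (ferm must also permit it).
        sudo ip addr add 2x.x.x.246/29 dev vboxnet0
        sudo sysctl -w net.ipv4.ip_forward=1
        # Guests then use 2x.x.x.241-245 with gateway 2x.x.x.246; no NAT is
        # needed because upstream already routes the subnet to the host.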

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem?

    A wifi ADSL modem router, D-Link 2640R, is installed in the living room at about 1.8m height. It works fine, synchronises, and gets/serves a stable internet connection.

    First situation:

        - Laptop 01 at the other end of the house, let's say in room01, 15m south of the living room. Gets a stable signal of good to very good quality. No disconnections.
        - Laptop 02 in room02, opposite room01 (5m west), which makes it almost the same distance and direction from the router, 15m north. Gets a stable signal of good to very good quality. No disconnections.

    Second situation:

        - Laptop 01 moved to room03, north of the living room (actually just 3m behind the wall where the router sits). Gets a stable signal of excellent quality. No disconnections.
        - Laptop 02 still in room02, but now experiences frequent disconnections. It's actually almost impossible to get on the Internet, even though the signal level is still very good: either no Internet with the wifi icon appearing connected to the access point, or no connection established at all, which happens every 2 minutes. That means virtually no Internet at all, as I get a window of about a minute to load any website or even reach the router's web-based control panel.

    If Laptop 01 is completely shut down, or its wifi adapter is shut down, or even still working but its wifi MAC address forbidden, then Laptop 02 has no problem at all. If Laptop 02 is moved to a location nearer the router, in the living room for instance, then no connection problem occurs, even if Laptop 01 is also connected. And also, if we move Laptop 01 back to its original location (room01), then there's no problem either.

    I'm completely lost and don't know how to address this issue. I tried changing the wifi channel and even tried the auto channel scan, but that didn't solve it. I know the problem is probably coming from Laptop 01 being in its new location, or some sort of interference, as the problem occurs only under the described conditions, but I have no idea how to solve it! I also scanned the neighborhood for wifi jams using inSSIDer; there are a few other access points, but they don't seem to affect the situation.

    Any ideas about the steps to follow or tools to use?

  • How to remove RAID flag on unstriped drive without losing data?

    - by Alex Folland
    I have a Gigabyte Z68X-UD4-B3 motherboard. It advertises this new thing called "XHD", which is like RAID but makes an SSD and a traditional-style drive work together to enable high speed with high capacity. I don't want to use this feature, and I already have Windows 7 64-bit installed without using it.

    When I first installed my two hard drives (one SSD and one traditional-style drive) in my machine and booted it up for the first time, it ran a program from the mobo that asked me if I wanted to set up XHD. Thinking it would go to some config screen, I said yes. It immediately started doing something with my drives and finished. I considered that strange, but figured it wouldn't matter when I simply installed Windows onto my SSD only. I now have my BIOS and Windows running in AHCI mode with no RAID arrays and separate drives.

    My SSD is one of those new Corsair Force GT drives which loses power every so often, causing Windows to BSOD. I've figured everything out about this problem, including installing the latest firmware from Corsair, and the only way to fix it at this point is to install Intel Rapid Storage Technology to control AHCI instead of Windows, since the Windows AHCI driver disables the drive's power every once in a while and can't be configured not to do so.

    I've tried installing Intel Rapid Storage Technology. When I reboot my machine after doing so, it BSODs just after the Windows logo. I've figured out this is because my SSD and my traditional drive are flagged as RAID, as seen in the "Intel Matrix Storage Manager" program found by switching the BIOS hard drive handling to "RAID" mode. This is due to the XHD auto-config program I mentioned earlier. Normally the BIOS is set to AHCI, and when the drives boot in AHCI mode they work perfectly. So I've concluded the data is stored in AHCI mode but the drives' flags are set to RAID.

    I've figured out that I could accomplish my objective by using the "Intel Matrix Storage Manager" program on the mobo (with "Reset disks to non-RAID"), but doing so would completely wipe the drives I select. I want to simply toggle these flags from RAID to AHCI so Intel Rapid Storage Technology doesn't fail and cause a BSOD on boot, but without wiping the drives.
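
    The usual way to drop an Intel (isw/IMSM) flag without touching the partition contents is to erase just the RAID metadata block, which lives at the end of the disk. Offered strictly as a hedged sketch: it is still a destructive metadata write, so back up first and run it from a Linux live/rescue CD, not from the installed Windows:

        dmraid -r                 # list disks carrying a RAID signature
        dmraid -r -E /dev/sda     # erase the isw metadata on one disk (prompts first)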

  • Routing with VPN and asymmetric communication

    - by Louis
    I'm stumbling on a problem that requires your advice. Keywords: networking, route, OpenVPN.

    Problem: I have a local network with several physical servers and VMs. These machines have IPs in the range 10.10.x.x. I can access these machines from the Internet with the help of OpenVPN. These machines can:

        - access each other within the local 10.10.x.x subnet
        - access the Internet via the VPN
        - themselves be accessed (via SSH) from the Internet via the VPN

    There is one machine, however, that behaves strangely, and I don't know why. I can SSH into this machine from anywhere, and I can also ping it from anywhere (including the Internet). However, from this machine (i.e. when logged into it) I cannot access the Internet or ping machines outside the local network. In other words, it will not go beyond the VPN. My question is: why?

    Here are some technical details. The machine's network config (running Debian 6.0.3):

        allow-hotplug eth0
        iface eth0 inet static
            address 10.10.10.200
            netmask 255.255.0.0
            network 10.10.10.0
            broadcast 10.10.10.255
            gateway 10.10.10.200

    The machine's routing:

        Destination     Gateway         Genmask         Flags MSS Window irtt Iface
        127.0.0.1       0.0.0.0         255.255.255.255 UH    0   0      0    lo
        10.10.0.0       10.10.10.250    255.255.0.0     UG    0   0      0    eth0
        10.10.0.0       0.0.0.0         255.255.0.0     U     0   0      0    eth0
        0.0.0.0         10.10.10.250    0.0.0.0         UG    0   0      0    eth0
        0.0.0.0         10.10.10.200    0.0.0.0         UG    0   0      0    eth0

    The VPN box's network config (running Debian 6.0.3):

        # This is the local network interface
        auto eth1
        allow-hotplug eth1
        iface eth1 inet static
            address 10.10.10.250
            netmask 255.255.0.0
            broadcast 10.10.10.255
            gateway 10.10.10.250

    The VPN box's routing table:

        Destination     Gateway         Genmask         Flags MSS Window irtt Iface
        10.10.0.0       0.0.0.0         255.255.255.0   U     0   0      0    tun0
        private         0.0.0.0         255.255.255.0   U     0   0      0    eth0
        10.10.0.0       0.0.0.0         255.255.0.0     U     0   0      0    eth1
        0.0.0.0         10.10.10.250    0.0.0.0         UG    0   0      0    eth1
        0.0.0.0         private         0.0.0.0         UG    0   0      0    eth0

    net.ipv4.ip_forward = 1 on both machines. There are no iptables rules set anywhere. Thanks in advance for any feedback.
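
    One thing that stands out (a hedged observation, not a confirmed diagnosis): the problem machine lists itself, 10.10.10.200, as its own gateway, and carries a second default route through itself. A sketch of pointing the default at the VPN box instead:

        sudo ip route del default via 10.10.10.200
        sudo ip route add default via 10.10.10.250 dev eth0
        # Persist by changing "gateway 10.10.10.200" to "gateway 10.10.10.250"
        # in /etc/network/interfaces, then verify with: ip route show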

  • Excel 2010: dynamic update of drop down list based upon datasource validation worksheet changes

    - by hornetbzz
    I have one worksheet for setting up the data sources of multiple data validation lists. In other words, I'm using this worksheet to provide drop down lists to multiple other worksheets. I need to dynamically update all worksheets upon any single change (or several changes) on the data source worksheet. I understand this should come with an event macro over the entire workbook. My question is how to achieve this while keeping the "OFFSET" formula across the whole workbook. Thanks.

    To support my question, here is the piece of code that I'm trying to get working. I'm using a formula like the following for a pseudo-dynamic update of the drop down lists, for example:

        =OFFSET(MyDataSourceSheet!$O$2;0;0;COUNTA(MyDataSourceSheet!O:O)-1)

    I looked into the Pearson book's event chapter, but I'm too much of a noob for this. I understand the macro below and implemented it successfully as a test, with the drop down list on the same worksheet as the data source. My point is that I don't know how to deploy this over a complete workbook.

    Macro related to the data source worksheet:

        Option Explicit

        Private Sub Worksheet_Change(ByVal Target As Range)
            ' Macro to update all worksheets with drop down lists referencing
            ' this data source worksheet, based on ref names
            Dim cell As Range
            Dim isect As Range
            Dim vOldValue As Variant, vNewValue As Variant
            Dim dvLists(1 To 6) As String 'data validation area
            Dim OneValidationListName As Variant

            dvLists(1) = "mylist1"
            dvLists(2) = "mylist2"
            dvLists(3) = "mylist3"
            dvLists(4) = "mylist4"
            dvLists(5) = "mylist5"
            dvLists(6) = "mylist6"

            On Error GoTo errorHandler

            For Each OneValidationListName In dvLists
                'Set isect = Application.Intersect(Target, ThisWorkbook.Names("STEP").RefersToRange)
                Set isect = Application.Intersect(Target, _
                    ThisWorkbook.Names(OneValidationListName).RefersToRange)
                ' If a change occurred in the source data sheet
                If Not isect Is Nothing Then
                    ' Prevent infinite loops
                    Application.EnableEvents = False
                    ' Get previous value of this cell
                    With Target
                        vNewValue = .Value
                        Application.Undo
                        vOldValue = .Value
                        .Value = vNewValue
                    End With
                    ' LOCAL dropdown lists: for every cell with validation
                    For Each cell In Me.UsedRange.SpecialCells(xlCellTypeAllValidation)
                        With cell
                            ' If it has list validation AND the validation formula
                            ' matches AND the value is the old value
                            If .Validation.Type = 3 And _
                               .Validation.Formula1 = "=" & OneValidationListName And _
                               .Value = vOldValue Then
                                ' Debug
                                ' MsgBox "Address: " & Target.Address
                                ' Change the cell value
                                cell.Value = vNewValue
                            End If
                        End With
                    Next cell
                    ' Call the other worksheets' update macros
                    Call Sheets(5).UpdateDropDownList(vOldValue, vNewValue)
                    ' GoTo NowGetOut
                    Application.EnableEvents = True
                End If
            Next OneValidationListName

        NowGetOut:
            Application.EnableEvents = True
            Exit Sub

        errorHandler:
            MsgBox "Err " & Err.Number & " : " & Err.Description
            Resume NowGetOut
        End Sub

    Macro UpdateDropDownList related to the destination worksheet:

        Sub UpdateDropDownList(Optional vOldValue As Variant, Optional vNewValue As Variant)
            ' Debug
            MsgBox "Received info for update : " & vNewValue
            ' For every cell with validation
            For Each cell In Me.UsedRange.SpecialCells(xlCellTypeAllValidation)
                With cell
                    ' If it has list validation AND the validation formula matches
                    ' AND the value is the old value
                    If .Validation.Type = 3 And .Value = vOldValue Then
                        ' Change the cell value
                        cell.Value = vNewValue
                    End If
                End With
            Next cell
        End Sub

  • cd Command Linux and Mystery Flags

    - by Jason R. Mick
    Platform: CentOS 6.2. Shell: tcsh.

    I'm playing around with cd for a bash script, and noticed the wondrous cd - option, but was left with many questions...

        1. Why the cd -? Isn't this redundant with cd ..?
           EDIT: As FatalError points out, these two commands don't do the same things, so the answer is "no".

        2. Can you delve farther back into your history with the - flag, a la a browser? E.g., when I type cd -, it takes me to my previous directory, but then if I enter that command again, it takes me to the directory I just came from, creating a sort of loop. Is a shorthand for going back multiple levels supported?
           EDIT: I realize I can go back with cd .., but was hoping this could be a gateway to a less verbose deep back, e.g. cd -3 vs. cd ../../../ ... hopefully that clarifies what I'm asking.
           EDIT 2: As to the current feedback: while .. is a special directory, I don't see a reason why the shell's built-in cd couldn't use a shorthand for ../../ ... ../, e.g. cd ..5, or why the built-in couldn't have a history (a la auto pushd/popd) that could be turned on and used like cd -3. I get that this could be somewhat of a security/privacy risk, but I don't see how it's any worse than storing a command history, which most shells/terminals do.

        3. The manpage for cd, accessible via man cd and help cd (it's the same for either command), only lists the -L and -P flags. However, when I type cd --help it outputs: Usage: cd [-plvn][-|<dir>].. Am I right in assuming the other flags and the - (back) option are nonstandard?

        4. What are the -n and -v flags for? Both seem to take me back to my home directory; that's all I've been able to figure out via experimentation.

    A quick read of web resources [1][2] offered just the same sort of info that the man page did, and didn't answer my questions. Note: the second Linux-centric resource above claimed cd only had two options (obviously not true in current CentOS), hence my assumption that this functionality could be nonstandard.
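
    On the "deeper history" idea: cd - only toggles between $PWD and $OLDPWD, but the directory stack gives the multi-step version. A sketch (bash syntax shown; tcsh has pushd/popd built in, with slightly different stack addressing):

        cd /tmp; cd /etc; cd -   # back to /tmp; repeating toggles, it doesn't walk
        pushd /var/log           # enter a directory, remembering the previous one
        pushd /usr/share
        dirs -v                  # numbered view of the stack
        popd                     # step back one directory
        cd ~2                    # bash only: jump straight to stack entry 2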

  • Caching issue with CentOS forwarding DNS server

    - by Paddington
    I installed a forwarding DNS server on CentOS 5.10, and it is resolving addresses, e.g. google.com. When I stopped named (service named stop) and tried to dig (dig @localhost A google.com), there was a failure to resolve the address. I checked and see that the caching daemon nscd is running. Does this mean the server is not caching at all? How can I get it to cache?

    named.conf:

        options {
            // Those options should be used carefully because they disable port
            // randomization
            // query-source port 53;
            // query-source-v6 port 53;

            // Put files that named is allowed to write in the data/ directory:
            listen-on port 53 { 127.0.0.1; 10.0.0.4; };
            directory "/var/named"; // the default
            dump-file "/var/named/chroot/var/named/data/cache_dump.db";
            statistics-file "/var/named/chroot/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/chroot/var/named/data/named_mem_stats.txt";

            // allow-query { localhost; 192.168.0.0/24; 10.0.0.0/8; };
            recursion yes;
            // allow-query { localhost; 10.0.0.0/8; };
            allow-query { localhost; any; };
            allow-query-cache { localhost; any; };
            forward only;
            forwarders { 8.8.8.8; 8.8.4.4; };
            dnssec-enable yes;
            // dnssec-lookaside auto;

            /* Path to ISC DLV key */
            // bindkeys-file "/etc/named.iscdlv.key";
            // managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
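
    Two hedged checks: dig talks to the name server directly and never consults nscd (nscd only backs the libc lookup path), so with named stopped there is nothing left to answer; named itself is the cache while it runs. To see the cache working:

        dig @localhost A google.com | grep 'Query time'   # first lookup: forwarded
        dig @localhost A google.com | grep 'Query time'   # repeat: ~0 msec, cached
        rndc dumpdb -cache   # then inspect the dump file (path as configured
                             # above; adjust for the chroot prefix if needed)
        grep -c google /var/named/chroot/var/named/data/cache_dump.db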

  • mdadm raid1 fails to resync

    - by JuanD
    Hello, I'm trying to solve a problem I'm having with an mdadm RAID 1. I have an Ubuntu 9.04 server running on a software two-drive RAID 1 with mdadm. Yesterday one of the drives failed, so I replaced it with a brand new drive of the same size. I removed the faulty drive, copied the partition table from the remaining good drive to the new drive, and then added it to the raid. It re-synced and the system worked fine, until the drive that hadn't failed was also labeled failed. Now the raid was running solely on the new drive.

    So I purchased another drive and repeated the procedure above. Now I had two brand new drives and the raid was syncing. However, after a few minutes I checked /proc/mdstat, and the raid was no longer syncing. mdadm --detail /dev/md1 shows (sdb is the first new drive, and sdc is the second new drive):

        root@dola:/home/jjaramillo# mdadm --detail /dev/md1
        /dev/md1:
                Version : 00.90
          Creation Time : Sat Dec 20 00:42:05 2008
             Raid Level : raid1
             Array Size : 974711680 (929.56 GiB 998.10 GB)
          Used Dev Size : 974711680 (929.56 GiB 998.10 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 1
            Persistence : Superblock is persistent

            Update Time : Wed Jun  2 10:09:35 2010
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1

                   UUID : bba497c6:5029ba0b:bfa4f887:c0dc8f3d
                 Events : 0.5395594

            Number   Major   Minor   RaidDevice State
               2       8       35        0      spare rebuilding   /dev/sdc3
               1       8       19        1      active sync        /dev/sdb3

    I've tried removing and re-adding the drive a few times, but the same thing happens: the raid fails to resync. I've looked at /var/log/messages and found the following:

        Jun  2 07:57:36 dola kernel: [35708.917337] sd 5:0:0:0: [sdb] Unhandled sense code
        Jun  2 07:57:36 dola kernel: [35708.917339] sd 5:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun  2 07:57:36 dola kernel: [35708.917342] sd 5:0:0:0: [sdb] Sense Key : Medium Error [current] [descriptor]
        Jun  2 07:57:36 dola kernel: [35708.917346] Descriptor sense data with sense descriptors (in hex):
        Jun  2 07:57:36 dola kernel: [35708.917348]   72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
        Jun  2 07:57:36 dola kernel: [35708.917357]   00 43 9e 47
        Jun  2 07:57:36 dola kernel: [35708.917360] sd 5:0:0:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed

    So it looks like there's some kind of error on sdb (the first new drive). My question is: what would be the best approach to get the raid up and running again? I've thought about dd'ing /dev/md1 to a blank hard drive, then re-doing the raid from scratch and loading the data back, but there could be an easier solution. Any help would be appreciated.
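
    Given the unrecovered read error on sdb, which is the only remaining data source, a hedged first step before more rebuild attempts is to image whatever is readable; GNU ddrescue works around bad sectors. Device names follow the question, the map file path is arbitrary, and the array should be stopped first:

        sudo mdadm --stop /dev/md1
        sudo ddrescue -f /dev/sdb3 /dev/sdc3 /root/sdb3.map
        # Then assemble from the copy and check it before trusting it:
        # mdadm --assemble /dev/md1 /dev/sdc3 && fsck -n /dev/md1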
