Search Results

Search found 23949 results on 958 pages for 'test test'.

Page 755/958

  • SQL Server Replication Backup

    - by user18039
    Hi. We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand Point of Sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication. The solution has worked well in test. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting db or any of the system replication (eg distribution) databases fail then it's no big deal - we can clear everything down and re-create the replication. I realise that taking a complete snapshot of the OLTP would be time consuming (approx 5 hours), but I'd be more relaxed about this than trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than I have, eg if there were multiple subscribers with 2-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.

    Read the article

  • Viability of Mac OS X 10.9 Time Machine Server in office environment

    - by user197609
    Currently we have about 20 Mac OS 10.9 MacBook Pros (almost all with SSDs) backing up to individual USB drives. I'd like to consolidate these to one Drobo Thunderbolt drive array attached to a Mac Mini server (running 10.9 Server) using Time Machine Server. My question is, will this scale to 20 users? Examples I have seen seem to be 5 or 6 users tops, and this isn't easy for me to test (I'd rather not ask everyone to back up to the array and then switch back to USB drives if it brings our network to its knees). My primary concern is saturating our gigabit network, as Time Machine backs up every hour for every machine, so there would usually be a couple of people backing up at any given time. We also have some people occasionally on our 802.11ac network and not on ethernet (usually connected via 802.11n until people upgrade to newer machines), but most of the time people are connected to our Thunderbolt displays, which have a gigabit ethernet connection on them. Our network topology is one 32-port gigabit switch with 5 smaller gigabit switches at each desk cluster. The Mac Mini server is connected directly to the top-level switch. Update: Failing information from someone who has done this in practice, I suppose my question is really around how switches work. If three or four people are backing up simultaneously, and then two other (different) users transfer a file between each other, will they be able to transfer the file at gigabit speeds?

    Read the article

  • Ubuntu 10.10 - PC shuts down shortly after the BIOS loads, before booting

    - by clem
    Hi - since installing Ubuntu 10.10 (coming from Karmic) I've started getting problems with starting up the PC. I've done a complete wipe (Boot and Nuke) of the hard drive and reinstalled Ubuntu 10.10, but the problem still occurs. There is no dual boot on the PC, just Ubuntu. Here is the problem: each morning, when I turn the PC on from being off overnight, the PC starts up and loads the BIOS. I get the following message: "Verifying DMI Pool Data... K8 NPT Data Change...Update New Data to DMI!......." Then - poof - the computer shuts off. However, after switching the computer back on around 6 or 7 times after it's turned itself off, it will eventually boot up without any problem. Also, once up and running for a while, I can shut down and restart the PC first time, without any issues. I have also noticed a problem with the USB mouse being recognised: once I finally get the computer booted up, I need to unplug and then plug the mouse back in to get it working. I've opened the PC up and checked the connections (cables, cards and memory) and it all seems fine. The main issue with troubleshooting this problem is I cannot test any suggestions or fixes until the next morning, because once the computer is up and running it will remain so! I do not leave the computer on overnight, to save energy. So... is this a hardware or boot-software issue? This is a very odd problem and I have googled to no avail. Any suggestions? Many thanks, Clem

    Read the article

  • Eclipse CDT won't recognize standard library

    - by Mike G
    I work primarily on my desktop, but I started working on my Mac laptop and noticed a problem with Eclipse CDT. The standard library was underlined yellow and cout wouldn't work (it wouldn't recognize/find it). I tried restarting the program and that didn't work. I then tried to see if Xcode would work, found that the version of Xcode was too old, and updated Xcode. Eclipse still didn't work, but Xcode did. I tried re-installing Eclipse to the new version and re-installing CDT. It still wouldn't work. Restarting my computer won't work either. I'm not sure if this helps (or even matters/applies), but when I type g++ --version into the terminal, it doesn't work. (I don't know if that matters, but some tutorial told me to do that to check whether the compiler was working.) So, in review, I have: re-installed Eclipse, restarted my computer, re-installed Xcode (which I think has all the compilers Eclipse uses), updated Eclipse, and typed g++ --version to test the compiler.

    Read the article

  • Alias for Drupal "Sites" folder with Apache on Windows Server 2008

    - by sgtbeano
    I'm having to move a number of sites from a LAMP stack to a WAMP one, provided by Zend, and I've hit a problem. Our architecture is a number of load-balanced web servers which have their own local webapp drives, which are kept in sync with one server acting as the master copy. There is then a separate DFS share provided to all web servers from our Pillar SAN. Usually a Drupal install under our LAMP cluster would have the main Drupal web app in a local HTDOCS mount for each server, and the SITES directory within Drupal would then be symlinked out to the DFS or NFS share so that there is a common FILES and TMP directory. The problem I'm having is that there seems to be no equivalent of symlinks on Windows Server 2008; shortcuts have a .lnk extension, making Apache see them as a distinct file. So I've tried using an Alias call in the vhost file like this:

        <Location /drupal-626/sites>
            Order deny,allow
            Allow from all
        </Location>
        Alias /drupal-626/sites "Z:\Path to alternate sites directory"

    The root for this test is http://main-domain-url/drupal-626/. Unfortunately this isn't working, so I'm wondering if any of you have a solution which would work? Many thanks for taking the time to read this.

    Read the article

  • SSL Mail server connection times out on send()

    - by Jivan
    When trying to programmatically send an email from a website of mine, with the PHP PEAR Mail package over an SSL connection, PEAR::Mail replies with the following: Failed to connect to example.blabla.net:PORT [SMTP: Failed to connect socket: connection timed out (code: -1, response: )]. I looked for similar questions on SO and SF; all the answers ask the OP to test the connection with telnet or ssh on the command line. So that is what I did, and here is what happens:

        $ ssh -l myusername -p PORT example.blablabla.net
        _

    Here, '_' in the second line means that NOTHING happens, indefinitely, which seems coherent with the timeout message I had from PEAR::Mail. So PEAR::Mail does not seem to be the cause. But what I have to tell you is that yesterday, it just worked. The connection was properly established, mails were properly sent, etc. Just today it doesn't work anymore and I absolutely don't know why. I restarted Apache (in case an extension was broken), restarted mail services, etc. Still no effect. Between yesterday (when it worked) and today (when it doesn't anymore), I didn't touch the server and did nothing on it, simply because I took a day off to write some blog post! Has anyone of you encountered a similar problem? The problem seems quite common, judging by some googling, but the solution doesn't. Thanks for any help! (Note on config: CentOS 6.4 x86_64 with cPanel/WHM.)
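
    A minimal Python sketch of the same connectivity probe, assuming the implicit-SSL submission port 465 (the hostname and port in the question are placeholders); it times out the same way the telnet/ssh test does if the port is blocked:

        import smtplib
        import socket

        HOST = "example.blabla.net"   # placeholder hostname from the question
        PORT = 465                    # assumed implicit-SSL submission port

        try:
            # SMTP_SSL negotiates TLS as soon as the socket opens; the timeout
            # keeps the probe from hanging indefinitely like the ssh test did.
            server = smtplib.SMTP_SSL(HOST, PORT, timeout=10)
            print(server.ehlo())
            server.quit()
        except (socket.timeout, OSError) as exc:
            print("connection failed:", exc)

    If this fails from the web server but succeeds from another machine, the block is most likely on the network path (a firewall or the provider filtering the port) rather than in PEAR::Mail.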

    Read the article

  • Mercurial hook fails on Windows

    - by Nick Hodges
    I am trying to use the headcount hook (https://bitbucket.org/dgc/headcount/overview) with my main develop repository. I pulled the code and placed it in C:\Python26\Lib\site-packages. I made the following entries into my hgrc file:

        [hooks]
        pretxnchangegroup.headcount = python:headcount.headcount.hook

        [headcount]
        push_ok = *
        commit_ok = *
        warnmsg = %(headcount)d new heads detected. You may not push new heads to this repository.
        debug = False

    All this is as per the install instructions. I then cloned the repository, created a branch, committed a change to that branch, and then issued hg push -f as a test. However, this fails with:

        C:\junk\htmlwriter>hg push -f
        pushing to c:\code\htmlwriter
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        transaction abort!
        rollback completed
        abort: pretxnchangegroup.headcount hook is invalid (import of "headcount.headcount" failed)

    I then ran this:

        C:\Python26>python c:\Python26\Lib\site-packages\headcount\headcount.py
        Traceback (most recent call last):
          File "c:\Python26\Lib\site-packages\headcount\headcount.py", line 2, in <module>
            import mercurial.node
        ImportError: No module named mercurial.node

    I'm far from a python expert, so can someone help me figure out how to get the headcount hook to run inside my mercurial environment? Details: Windows 7, Mercurial 1.7.2, TortoiseHg 1.1.7
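
    The traceback above comes from the stand-alone C:\Python26 interpreter, which has no Mercurial package installed; an in-process python: hook, on the other hand, runs inside whichever Python interpreter runs Mercurial itself (and TortoiseHg of that era typically ships its own bundled interpreter and library, which does not see C:\Python26\Lib\site-packages). A small check, run with each candidate interpreter, to see which one can actually import both pieces:

        # Run with the Python that will execute the hook; an ImportError pinpoints
        # which package that interpreter cannot see.
        import sys

        print(sys.executable)
        print(sys.path)

        import mercurial.node       # fails if this Python has no Mercurial package
        import headcount.headcount  # fails if headcount isn't on this interpreter's path

        print(mercurial.node.__file__)
        print(headcount.headcount.__file__)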

    Read the article

  • Plesk Uninstall Memory issue

    - by user115079
    I am trying to uninstall Plesk from my VPS by running the following command:

        yum remove sw-* psa-* plesk-*

    When I run this command I get the following error:

        Running rpm_check_debug
        Running Transaction Test
        memory alloc (4 bytes) returned NULL.

    The first time I ran the above command, the number in this memory alloc message was very big, something like 67864987. Then I googled it, got some clear/ulimit commands, executed them, rebooted my system, stopped all processes and executed this command again, but I am still getting the 4-byte issue and don't know how to get rid of it. I also tried ulimit after the reboot, but no success. And yes, no swap is attached. These are the stats of my system:

        [root@vps ~]# free -m
                     total       used       free     shared    buffers     cached
        Mem:           384         67        316          0          0          0
        -/+ buffers/cache:          67        316
        Swap:            0          0          0

        top - 21:01:07 up 3:12, 1 user, load average: 0.24, 0.08, 0.03
        Tasks:  31 total,  2 running, 29 sleeping,  0 stopped,  0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:    393216k total,    69832k used,   323384k free,        0k buffers
        Swap:        0k total,        0k used,        0k free,        0k cached

    Is there any other alternative to achieve my goal of uninstalling Plesk? Thanks.

    Read the article

  • Make nginx config like apache2 virtualhosts

    - by user2104070
    I have a web server running apache2 with many subdomains on it, like domain.com, abc.domain.com, def.domain.com, etc. Now I got a new nginx server and want to set it up like apache2, so to test I created two configs (two files in /etc/nginx/sites-available/, linked to from sites-enabled/) as shown.

    domain.config:

        server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;

            root /srv/www/;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name domain.com;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
            }
        }

    abc-domain config:

        server {
            listen 80;
            listen [::]:80;

            root /srv/www/tmp1/;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name abc.domain.com;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
            }
        }

    But when I access domain.com I am getting index.html from /var/www/tmp1 only. Is there something I'm doing wrong in the nginx config?

    Read the article

  • postfix email gateway

    - by k-h
    I am setting up a postfix email gateway. It will not hold any mail, but will accept email for my domain, forward it to another internal mailserver, and relay mail out from the internal server. One of the main problems is that I am working on a live running system and this will be an upgrade, so I am using a test domain which I will change at some point to the real domain. I tried various methods but found the simplest way (that worked) was to use a script to create an aliases file (from LDAP entries). There are various problems with this method. The main one being that the entries can't be of the simple form user1@example.com, because the gateway doesn't know where to send them. They have to be of the form user1@realmailserver.example.com. What I would like doesn't seem hard, but I can't get my head around the postfix documentation. There seem to be various ways, but none of them seem to work. Most of the examples I have found on the web assume the mail is going to end up on the server. I want a list of users somewhere, preferably of the form user1, user2, etc rather than user1@example.com, user2@example.com (I can easily generate this list), and I would like postfix to forward all email for example.com to a particular server, ie realmailserver.example.com. Can anyone suggest clues as to how I might do this?
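
    One common way to build a forwarding-only gateway is to skip per-user aliases entirely: declare the domain in relay_domains, point transport_maps at the internal server, and list the valid recipients in relay_recipient_maps. A rough Python sketch (the hostnames and the choice of parameters are assumptions for illustration, not taken from the existing setup) that generates those two lookup tables from a plain user list:

        # Writes Postfix lookup tables from a bare list of usernames; run
        # "postmap relay_recipients" and "postmap transport" afterwards.
        DOMAIN = "example.com"
        INTERNAL = "realmailserver.example.com"
        users = ["user1", "user2", "user3"]        # e.g. generated from LDAP

        with open("relay_recipients", "w") as f:   # referenced by relay_recipient_maps
            for user in users:
                f.write("%s@%s OK\n" % (user, DOMAIN))

        with open("transport", "w") as f:          # referenced by transport_maps
            f.write("%s smtp:[%s]\n" % (DOMAIN, INTERNAL))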

    Read the article

  • Experience with asymmetrical (non-identical hardware) SQL Server 2005 / Win 2003 cluster

    - by user24161
    I am reasonably good at dealing with SQL Server clusters; I am wondering if folks have experience, good or bad, using a mix of different models of servers from the same vendor in one SQL 2005 cluster. Suppose: I have one more powerful, more RAM, more shizzle box and one less powerful, less memory, less shizzle box bound together in a 2-node cluster. These would be HP DL380 and 580 machines (not that it should matter). I understand AND automate the process of managing memory for each SQL instance, so there's no memory contention when SQL instances fail over. Basically I am thinking a CLR proc will monitor the instances and self-regulate memory caps on each instance, so that they won't page or step on one another. I get the fact the instances might be slower and/or under memory pressure if they share a "lesser" node, and that's OK. The business can deal with a slower instance in a server-problem scenario. Reasonable? Any "gotchas" to watch out for?

    More info 10/28: doing some experiments with a test cluster, I find that reconfiguring max/min memory is OK PROVIDED the instance isn't already under memory pressure. If I torture the system with a huge query that demands a big chunk of RAM, and simultaneously adjust the memory allocation to a smaller value than what is being actively used, it's possible to run the instance out of memory and have it halt and restart itself (unhappy situation). Many ugly out-of-memory messages in the error log, crashing, burning... It's an extreme case, but good to know. Seems, then, that it would only be really safe to set this on startup of the instance, as in have a startup script that says "I am on node1, so my RAM settings are X, or I am on node two, so they are Y," like this: http://sqlblog.com/blogs/aaron_bertrand...

    Update: I am testing a SQL Agent + PowerShell solution described in more detail here.

    Read the article

  • Why Are SPF Records Failing?

    - by robobobobo
    OK, I've been going through various different sites, resources and topics here trying to figure out what is wrong with my SPF records, but no matter what I do they don't seem to pass. Here's what I have:

        "v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 -all"

    I've tried multiple different tools to check my SPF records; some give me a pass, some don't. But I can't send mail to certain Google Apps accounts, they just bounce back all the time, which is very annoying. Anyone got any ideas? I have noticed that the source IP address is not one of the IPv4 addresses I've defined, but cPanel wouldn't let me add that address to it. And here is the result of the tests I'm getting back from port25.com. I'm running WHM by the way and have enabled SPF and DKIM.

        Summary of Results
        SPF check:          fail
        DomainKeys check:   neutral
        DKIM check:         pass
        Sender-ID check:    fail
        SpamAssassin check: ham

        Details:
        HELO hostname: server1.viralbamboo.com
        Source IP:     2a01:258:f000:6:216:3eff:fe87:9379
        mail-from:     ###@viralbamboo.com

        SPF check details:
        Result:         fail (not permitted)
        ID(s) verified: smtp.mailfrom=###@viralbamboo.com
        DNS record(s):
            viralbamboo.com. SPF (no records)
            viralbamboo.com. 13180 IN TXT "v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 -all"
            viralbamboo.com. AAAA (no records)
            viralbamboo.com. 13180 IN MX 0 viralbamboo.com.
            viralbamboo.com. AAAA (no records)

        DomainKeys check details:
        Result:         neutral (message not signed)
        ID(s) verified: header.From=###@viralbamboo.com
        DNS record(s):

        DKIM check details:
        Result:         pass (matches From: ###@viralbamboo.com)
        ID(s) verified: header.d=viralbamboo.com

        Canonicalized Headers:
        content-type:multipart/alternative;'20'boundary="4783D1BE-5685-41CF-B91B-1F15E91DD1E3"'0D''0A'
        date:Mon,'20'1'20'Jul'20'2013'20'21:30:47'20'+0000'0D''0A'
        subject:=?utf-8?Q?test?='0D''0A'
        to:"[email protected]?="'20''0D''0A'
        from:=?utf-8?Q?Rob_Boland_-_Viralbamboo?='20'<###@viralbamboo.com'0D''0A'
        mime-version:1.0'0D''0A'
        dkim-signature:v=1;'20'a=rsa-sha256;'20'q=dns/txt;'20'c=relaxed/relaxed;'20'd=viralbamboo.com;'20's=default;'20'h=Content-Type:Date:Subject:To:From:MIME-Version;'20'bh=CJMO7HYeyNVGvxttf/JspIMoLUiWNE6nlQUg5WjTGZQ=;'20'b=;
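
    One plausible reading of the report above: the message went out over IPv6 (Source IP 2a01:...), the domain publishes no AAAA records, so neither the a, mx, nor ip4: mechanisms can ever match an IPv6 client and evaluation falls through to -all. A small Python sketch of the ip4 comparison, with the addresses copied from the report:

        import ipaddress

        source_ip = ipaddress.ip_address("2a01:258:f000:6:216:3eff:fe87:9379")  # from the port25 report
        ip4_mechanisms = [ipaddress.ip_network("217.78.0.92/32"),
                          ipaddress.ip_network("217.78.0.95/32")]

        # An IPv6 address is never contained in an IPv4 network, so this prints
        # False, mirroring the "fail (not permitted)" result above.
        print(any(source_ip in net for net in ip4_mechanisms))

    If that is what is happening, the usual fixes are to publish the server's IPv6 address with an ip6: mechanism or to force outbound delivery over IPv4.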

    Read the article

  • What's the safest way to kick off a root-level process via cgi on an Apache server?

    - by MartyMacGyver
    The problem: I have a script that runs periodically via a cron job as root, but I want to give people a way to kick it off asynchronously too, via a webpage. (The script will be written to ensure it doesn't run overlapping instances or such.) I don't need the users to log in or have an account; they simply click a button and if the script is ready to be run it'll run. The users may select arguments for the script (heavily filtered as inputs), but for simplicity we'll say they just have the button to choose to press. As a simple test, I've created a Python script in cgi-bin. chown-ing it to root:root and then applying "chmod ug+" to it didn't have the desired results: it still thinks it has the effective group of the web server account... from what I can tell this isn't allowed. I read that wrapping it with a compiled cgi program would do the job, so I created a C wrapper that calls my script (its permissions restored to normal) and gave the executable the root permissions and setuid bit. That worked... the script ran as if root ran it. My main question is, is this normal (the need for the binary wrapper to get the job done) and is this the secure way to do this? It's not world-facing but still, I'd like to learn best practices. More broadly, I often wonder why a compiled binary is more "trusted" than a script in practice? I'd think you'd trust a file that was human-readable over a cryptic binary. If an attacker can edit a file then you're already in trouble, more so if it's one you can't easily examine. In short, I'd expect it to be the other way 'round on that basis. Your thoughts?
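
    For what it's worth, a minimal sketch of the CGI side under that wrapper approach - the path, parameter name and allowed values are made up for illustration. The only thing the page can influence is a whitelisted token; the compiled setuid wrapper does everything that actually needs root:

        #!/usr/bin/env python3
        # Sketch of a CGI handler that only ever runs a fixed setuid wrapper.
        import cgi
        import subprocess

        WRAPPER = "/usr/local/bin/run-maintenance"   # hypothetical compiled setuid wrapper
        ALLOWED_MODES = {"quick", "full"}            # the only arguments allowed through

        print("Content-Type: text/plain")
        print()

        form = cgi.FieldStorage()
        mode = form.getfirst("mode", "quick")
        if mode not in ALLOWED_MODES:
            print("rejected")
        else:
            # No shell and no free-form user input on the command line.
            status = subprocess.call([WRAPPER, mode])
            print("wrapper exit status:", status)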

    Read the article

  • Exchange 2010 mail routing with Hub Transport in multiple sites

    - by jmreicha
    I have two separate physical sites, Site A and Site B. In site A, I have the following: 2 CAS servers, 2 Hub Transport servers, 2 Mailbox servers, and 2 Edge servers. In site B, I have the following: 1 CAS, 1 Hub, 1 Mailbox, and 1 Edge. Currently everything is working out of site A. That is, all users are housed on mailboxes that are in site A and all inbound mail flow is pointing to site A. I would eventually like to be able to move some of the mailboxes to site B without causing a disruption, for resiliency and redundancy purposes, but I am not quite sure how to go about setting this up or if it is even possible. So far I have created an Edge subscription in site B and am able to send emails out from test accounts set up with mailboxes on the site B Mailbox server. However, I am unable to receive incoming mail messages and am confused. So I'm thinking incoming mail messages are still being directed to site A and then they are getting stuck because there is no way to route the mail to the site B mailboxes. Is this assumption correct? I am unfamiliar with mail flow and routing, so I am not really sure what I need to be looking at. Would I add the site B Hub Transport to the Edge subscription in site A? Or I guess more specifically, how would I go about enabling communication and mail flow between mailboxes split up across sites A and B?

    Read the article

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! this uses lots of system memory very fast!):

        $ cat /dev/zero | less

    From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests that it would limit it in such a way. In addition, I couldn't find any documentation via Google on the subject. Any light shed on this quite surprising discovery would be great! System information: quad-core Intel i7, 8 GB RAM.

        $ uname -a
        Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

        $ less --version
        less 458 (GNU regular expressions)
        Copyright (C) 1984-2012 Mark Nudelman
        less comes with NO WARRANTY, to the extent permitted by law.
        For information about the terms of redistribution,
        see the file named README in the less distribution.
        Homepage: http://www.greenwoodsoftware.com/less

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04 LTS
        Release:        14.04
        Codename:       trusty

    Read the article

  • 2.6.9 Kernel on virtual server (non upgradable) - any expected problems?

    - by chris_l
    Hi, I'm considering renting a virtual server (for me personally). The product I'm currently looking at offers IMO fair pricing, very good hardware, etc. The only problem is that I won't be able to upgrade to a newer kernel than 2.6.9 (running Debian Etch). Also, I can't install my own kernel modules. (The server runs with Virtuozzo, so as far as I understand it, it just does some chroot instead of a real virtualization (?)) I want to run GlassFish, Postgres, Subversion, Trac and maybe some other things on it. It will also have to employ a firewall, and provide OpenSSL for https. Ideally, it would also be able to do AIO (asynchronous IO), which could speed up some server I/O. Should I expect problems with that old kernel version, in conjunction with the software I want to install (I'd like to use current versions of the software)? One thing I already found out is that you can't do everything with iptables, since some kernel modules are missing/things are not built into the kernel. GlassFish v3 appears to run fine at first glance. I was able to test the server for a few hours. Installing my whole setup wasn't feasible in that time, but what I can say is that it's amazingly fast for an entry-level vserver, especially hard disk and network performance (averaging at ca. 400 MBit/s). So if the kernel won't be a problem, I'd really like to take it. Thanks, Chris. PS: Exact kernel version: 2.6.9-023stab051.3-smp

    Read the article

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly. I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop: the laptop has an xfs filesystem with full disk encryption. It uses the aes-cbc-essiv:sha256 cipher mode with a 256-bit key length. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation: the files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU that has the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance: I ran iperf between the two clients. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
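
    Putting the three measurements side by side already narrows things down: the 939 Mbit/s link tops out around 117 MB/s and the laptop's encrypted write path at 58.8 MB/s, so 125 MB/s was never reachable when copying from the workstation to the laptop; the open question is the remaining gap between roughly 59 MB/s and the observed 22 MB/s (rsync-over-ssh encryption and per-file overhead are the usual suspects). A quick sketch of the arithmetic:

        # Ceilings implied by the measurements quoted above (copying from the
        # workstation to the laptop, so the laptop's write speed is the disk bound).
        wire_mbit_s = 939     # iperf
        disk_write  = 58.8    # dd on the laptop's encrypted xfs, MB/s
        observed    = 22      # rsync throughput, MB/s

        wire_mbyte_s = wire_mbit_s / 8.0
        print("network ceiling: %.1f MB/s" % wire_mbyte_s)   # ~117 MB/s
        print("disk ceiling:    %.1f MB/s" % disk_write)
        print("unexplained gap: %.1f MB/s" % (disk_write - observed))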

    Read the article

  • Change the background color of selected text in Google Docs to increase readability [migrated]

    - by gene_wood
    How can I override or change the background color of text selected in Google Docs? It is difficult for me to see the difference and I would like to increase the contrast. After Google restyled Google Docs last year (or earlier this year), I've been unable to see selected text. It's possible this is a visual deficiency with my eyes. In Google Docs, under both Google Chrome (17.0.963.83 (Official Build 127885) m) and Firefox (11.0), when I select text inside a Google Doc, the selected text has a background of color #d6e0f5. Compare this to the default browser selection background color of #2f65c0. (I determined the color of the selected text background by taking a screenshot and using the color picker tool in Photoshop.) I've tested this using a brand new Firefox profile as well as a brand new Google Chrome profile. Here's a section of a screenshot showing the selected text. I've tried using a userstyle to override the CSS to go back to the default text selection color, using the "Stylish" plugin with this CSS:

        ::selection { background:#2f65c0; color:#ffffff; }
        ::-moz-selection { background:#2f65c0; color:#ffffff; }
        ::-webkit-selection { background:#2f65c0; color:#ffffff; }

    This code works on other sites, but I'm unable to get it to work on Google Docs. (I tested on other sites by applying the userstyle to a different domain and using bright yellow instead of the default dark blue #2f65c0.) When you use Google Docs, do you have the same color background for selected text or something different? (To test this, browse to docs.google.com, create a document, type text into the document, select the text with the mouse by dragging over it, take a screenshot, load the screenshot up in an image editor and determine the background color of the selected text.) This color differential (between light blue #d6e0f5 and white #ffffff) may be easy to see for others; the problem may lie with my eyes.

    Read the article

  • Problem with Amiga 1200 accelerator board

    - by cc0
    I just recently walked past a dump, where out of the corner of my eye I spotted something that looked like a huge keyboard. I went to take a closer look, and found out that it was an Amiga 1200 with a 030 accelerator board and Scala dongle. Jackpot! So anyway: I dried it, cleaned it, it works, but the floppy was not powering on and same with the harddrive. I am using an old Amiga 1200 PSU that was making some strange high-pitched noise when I tried to boot the Amiga with the harddrive installed in it. I removed the harddrive and it booted fine, with the PSU not emitting any detectable noise. However, when I have the 030 installed it sometimes reboots and shows a red "Software Error" screen. I tried removing the memory on the board, same effect. Sometimes it does not boot at all, just gives a black screen. Someone suggested the card had problems with 3.1 ROMs, but this Amiga has only 3.0 ROMs installed. Does anyone have any theories as to why it seems unstable? I don't have any other Amiga parts to cross-swap with to test a lot of things, so I'd really appreciate some sound input here so I'd know what to look for in order to try to fix it. And merry Christmas everyone :]

    Read the article

  • How is it possible to list all folders that a particular user/group has permissions on?

    - by Lord Torgamus
    Is it possible to list all folders/files that a given group has explicit permissions on, for a machine running Windows Server 2003? If so, how? It would be nice to see inherited permissions as well, but I could do with just explicit permissions. A little background: I'm trying to update groups/permissions on a test server. One of the groups, Devs, wasn't implemented correctly when it was created, and my goal is to remove it from the system. It has been replaced by LeadDevelopers, which has permissions on many — but naturally not all — of the same folders. I want to make sure that I don't accidentally orphan any folders or cause any other issues when I remove Devs. It did have some admin-level permissions. EDIT: The answers so far — at least *cacls and AccessEnum — provide a way to find out which groups/users have permissions on known directories/files. I actually want the reverse of this behavior: I know the group, and I'm looking for the directories/files for which the group has permissions. Also, as I noted in a comment, the Devs group is not itself a member of any other group.
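
    In the absence of a built-in reverse lookup, one workable approach is to walk the tree and grep each folder's ACL for the group. A rough Python sketch using cacls (which ships with Server 2003); the root path and group name are placeholders, and it flags any ACE mentioning the group, inherited or explicit, so treat the output as a candidate list to review:

        import os
        import subprocess

        ROOT = r"D:\Shares"        # hypothetical starting point (a UNC path works too)
        GROUP = r"MYDOMAIN\Devs"   # the group being retired

        for dirpath, dirnames, filenames in os.walk(ROOT):
            # cacls prints every ACE on the folder; we just look for the group name.
            acl = subprocess.run(["cacls", dirpath],
                                 capture_output=True, text=True).stdout
            if GROUP.lower() in acl.lower():
                print(dirpath)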

    Read the article

  • Read access to Active Directory property (uSNChanged)

    - by Tom Ligda
    I have an issue with read access to the uSNChanged property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNChanged property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNChanged property for some users (UserGroupA) and not for others (UserGroupB). When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference in the "Security" tab. The users in UserGroupA have the "Include inheritable permissions from this object's parent" option unchecked. The users in UserGroupB have that option checked. I also noticed that the users in UserGroupA are users that were created earlier. The users in UserGroupB are users created recently. It's difficult to quantify, but I estimate the cutoff in creation time between the users in UserGroupA and UserGroupB is about 6 months ago. What can cause user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security property actually the cause of the issue with read access to the uSNChanged property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNChanged property for all users when doing an LDAP search. I would also be OK if I could grant read access for that property to an AD group. Then I can control access by adding members to the group.

    Read the article

  • How to fix UNMOUNTABLE_BOOT_VOLUME (0x000000ED) on my Windows XP DELL laptop?

    - by Neil
    I have a Dell Latitude D410 running Windows XP. I am receiving the STOP: 0x000000ED (0X899CF030,0XC0000185,0X00000000,0X00000000) blue screen. Initially, I tried everything specified in the Microsoft KB articles. At this time, I was able to boot into general safe mode. I pulled the hard drive and was able to run chkdsk on it - it noted that it had fixed some errors, but I was still unable to boot. I put a brand new hard drive in the laptop. The Windows XP installation worked up until the reboot, at which time the exact same error message came back up. What I have tried (all since the new hard drive was installed): chkdsk /R; all suggested solutions in the Microsoft KB articles; reseating the RAM; opening the laptop, reseating all connectors and looking for signs of damage (saw none); resetting the BIOS options to default; and running the basic Dell diagnostics. I have also run MEMTEST86+ V4.10 for 15 passes (overnight) with 0 errors. I have looked at the existing entry "How can I boot XP after receiving stop error 0x000000ED" - I am currently in the process of downloading the Ultimate Boot CD to use as a test, but I am not holding out a lot of hope, as I really doubt this brand new hard drive is bad. Can anyone think of other areas I am missing?

    Read the article

  • Two folders with identical names causing many problems.

    - by R. A. Chaucer
    In my 'Documents' folder (Win7) I have two folders with names that appear in Explorer to be identical, though their contents are different. I can rename them both to something else (eg: 'Test') and Explorer doesn't complain. The dir listing that cmd.exe and PowerShell gives me only lists one of them, but also lists this suspicious entry:

        20/04/2010  12:16 PM    <DIR>          ????

    Even if I rename the folders to have unique names, one of them still shows up as ???? in cmd.exe. Desktop.ini in my Documents folder doesn't contain anything out of the ordinary. Both folders appear to be read-only in their properties panel, and if I untick the read-only box it will ask me if I want to apply the action recursively, but either way when I close the panel and open it again the folder is once again read-only. They are both set to not inherit permissions. The folder that shows up correctly in the cmd.exe dir listing is the "real" one; the other seems to be automatically created when a program tries to access it. How is this possible? This is driving me nuts!
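
    A ???? entry usually means the name contains characters outside the console's code page, so the two folders most likely differ by a non-ASCII or invisible character that Explorer renders the same way. A small Python sketch to expose the raw names (the profile path is a placeholder):

        import os

        path = r"C:\Users\yourname\Documents"   # adjust to the real profile path
        for name in os.listdir(path):
            print(repr(name))                   # repr() shows \u.... escapes hidden by Explorer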

    Read the article

  • XSL 2.0 unparsed text and formatting

    - by Maha
    I want the unparsed text to be formatted - bold characters or an increased font size - based on the tag. The example here replaces the searched word with bold characters. Example input: test <b> how to <b> when bold <b> when there is more <b> than one place to bold. Can you please advise what is wrong here?

        <xsl:variable name="tcline" select="unparsed-text('generic_tc.txt','UTF-8')"/>
        <xsl:analyze-string select="$tcline" regex="\'<b>'(.*?)\'<b>'">
            <xsl:matching-substring>
                <xsl:value-of select="replace($tcline,'\"<b>"(.*?)\"<b>"','<em>$1\/em/g;')"/>
            </xsl:matching-substring>
            <xsl:non-matching-substring>
                <xsl:value-of select="."/>
            </xsl:non-matching-substring>
        </xsl:analyze-string>

    Read the article

  • Custom attributes in Active Directory - determining usage/function and possible removal options?

    - by HopelessN00b
    I've bumped into a highly-customized Active Directory environment (2003 FL) that's got me wondering if there's any particularly easy way to figure out what a custom attribute's function is, and what, if anything, is "using" that particular attribute. And then what some good options for potentially removing custom attributes from the schema might be. Aside from a restore or starting from scratch. If such an option exists. For example, I think I can be fairly certain what the "isDumbass" attribute with a value of TRUE means, but not so much with "IRPextCONST", containing a value of 393684. Likewise, I'd think it should be pretty safe to delete the "isDumbass" attribute, but would like to a) be sure and b) find out what's querying or updating that value anyway, because I suspect that anything using that attribute might be next on the list of things to remove. Ideally, without having to run a search on the contents of every custom script and bit of source code I can get my hands on, of course. And finally, aside from rebuilding from scratch, or doing an authoritative AD restore from backups that don't exist... is there a way to delete a given custom attribute? (Not blank the value, but actually delete the attribute from the schema - some folks would rather not have attributes like "FaggotMeter" and "DouchebagCounter" hanging around.) I've been able to find and successfully test a method on Windows 2k, but it seems like Microsoft disabled this option in SP4, and the domain in question is a 2003 functional level.
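
    For the "what is using it" half, one cheap first pass is to find every object that actually has the custom attribute populated, which at least shows how widespread it is and which objects to audit. A hedged sketch using the ldap3 Python library (the server, credentials and base DN are placeholders; the attribute name is taken from the examples above):

        from ldap3 import Server, Connection, SUBTREE

        server = Server("dc01.example.com")
        conn = Connection(server, user="EXAMPLE\\auditor", password="...", auto_bind=True)

        # Every object where the custom attribute has any value at all.
        conn.search("dc=example,dc=com", "(isDumbass=*)",
                    search_scope=SUBTREE, attributes=["whenChanged"])
        for entry in conn.entries:
            print(entry.entry_dn, entry.whenChanged)

    This only finds where the attribute is populated, not what reads or writes it; tracking the consumers would still mean searching scripts and source, or turning up LDAP query diagnostics on the domain controllers.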

    Read the article
