Search Results

Search found 41497 results on 1660 pages for 'fault'.

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers, all running glusterd. Your Gluster volume would have to be set up with replica 3:

        gluster volume create test-volume replica 3 192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume

    You would then mount it as, say, /mnt/gfs_test. What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase the replica count to 5? My end goal is to run this on EC2 instances: say, 3 Apache front ends with the webroot on the gluster volume mount. My concern is that if I ever need to spin up a server, I would want it to be not only an additional Apache front end but also another server in the gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
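
    A minimal sketch of the expansion step, assuming GlusterFS 3.3 or later (where add-brick accepts a new replica count); the two new server IPs are hypothetical:

        # add two bricks and raise the replica count from 3 to 5 in one step
        gluster volume add-brick test-volume replica 5 \
            192.168.0.153:/test-volume 192.168.0.154:/test-volume

        # populate the new bricks from the existing replicas
        gluster volume heal test-volume full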

  • Prevent Amazon EC2 time zone from reverting on yum update

    - by D.Tate
    I use an Amazon EC2 server instance that runs a distro called Amazon Linux AMI. (I've read that it is based on CentOS/Red Hat.) My specific version is the 2012.09 release. Anyway, I was able to change the time zone about a week ago from the default UTC to America/New_York (which is EST/EDT). The command I used to change it was:

        ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

    ...thanks to this other Server Fault question. At that point, I was able to run date from the command line, and it correctly displayed the EDT time. And even after EDT "fell back" to EST this past Sunday, I was pleased to find that running date still produced the correct local time. So that was great. However, after running a yum update yesterday, it seems that my time zone got reverted back to plain ol' UTC. I even checked the last modified time of the /etc/localtime file, and indeed it confirmed that it had been modified around the same time I had updated. Is there any way to prevent this from happening again, or will I be stuck resetting the time zone every time I do a yum update?
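
    A commonly cited fix, sketched under the assumption that Amazon Linux honors the RHEL-style /etc/sysconfig/clock file (which the tzdata update scripts consult when they rewrite /etc/localtime):

        # record the zone where package updates will look for it
        sudo sed -i 's|^ZONE=.*|ZONE="America/New_York"|' /etc/sysconfig/clock

        # re-point /etc/localtime, which the yum update clobbered
        sudo ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime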

  • IPv6 seems to be enabled - how do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if for any reason DHCP has a hiccup. But I'm a bit lost as to where the configuration on my CentOS box is. (I am also on Google researching this, but I like Server Fault! :) ) I am hoping to be able to log into these via the VPN, because every now and then that DHCP device has a bad morning and needs to be restarted. (I'm also looking into that issue, but someone else handles it - management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use them as a pivot via SSH tunnels to get to other remote devices and continue to manage them while our main route is fixed. I guess my questions are: how can I configure IPv6 without interfering with IPv4, and, on CentOS, can I influence this auto-configuration I seem to be seeing?
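
    A minimal sketch of a static IPv6 setup on CentOS that leaves the IPv4 configuration untouched; the interface name and the documentation-prefix addresses are placeholders (the auto-configured addresses you are seeing are most likely SLAAC from router advertisements, which the IPV6_AUTOCONF line controls):

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0  (keep the existing IPv4 lines)
        IPV6INIT=yes
        IPV6_AUTOCONF=no            # optional: stop SLAAC if only the static address is wanted
        IPV6ADDR=2001:db8:0:1::56/64
        IPV6_DEFAULTGW=2001:db8:0:1::1

        # apply
        service network restart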

  • RDP Connection to Windows 7 stays really slow

    - by Pavlo
    I have an issue with connecting to Windows 7 via RDP. I can open an RDP session, but regardless of any settings, the response times are really long. This is particularly the case when opening a web page in a browser; I've tried IE, Firefox and Google Chrome. I also use an RDP connection to a Windows 2008 Server from the same client machine, and the speed is perfectly normal with all features turned on. We have Gigabit Ethernet here, so I don't think it can be the client's fault. As for the Windows 7 machine, I've tried shutting all the graphic features off and turning the color depth down to 256 colors. Result - the same. If I work locally on the machine, I cannot see any lags. What else have I tried: using the old RDP 5 client from Microsoft, and setting the network autotuninglevel as seen here. Do you have some ideas? Thanks in advance! Update: the problem seems to be with rendering window contents. All the window borders and panes are rendered pretty quickly, but the content shows up very slowly. Mouse movements are also recognised by the Win 7 box only after some delay. Are there some hidden settings in RDP where one could turn some advanced features off or turn some caching on? I use bitmap caching, but this apparently doesn't help.

  • Virtualize SBS 2003 - P2V vs. migrating to a new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment, and I could use some tips on what people think is the best way to proceed. Background: the SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role; AD/DNS is replicating there correctly. This was mainly done to provide fault tolerance for the domain - I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost. Options: I see (at least) two routes I could take to complete this. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1) P2V the primary DC:
    - Power off the primary DC
    - Power off the secondary DC (to prevent USN rollback in case the P2V has issues)
    - P2V (cold clone) the primary DC
    - Boot the new PDC VM and allow the new hardware to be detected
    - Remove the old NIC hardware from Device Manager
    - Assign the old IPs to the new virtual NICs
    - Reboot the PDC VM, confirm connectivity and no major issues
    - Power on the secondary DC, confirm replication

    Option 2) Create a new VM, transfer roles, remove the original DC from the domain:
    - Create a new VM and install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions them.)
    - Add the VM to the domain and promote it to the DC role (does this start the 7-day timer during which two SBS servers can be in the same domain?)
    - Set up RRAS on the new VM
    - Set up IIS/FTP on the new VM
    - Move the file shares to the new VM
    - Transfer the FSMO roles to the new VM DC
    - dcpromo the original primary DC out of the domain

  • How to view / enumerate / obtain a list of all effective rights / permissions on an Active Directory object?

    - by Laura
    I am new to Server Fault and was hoping to find an answer to a question that I have been struggling with for the past week or so. I have recently been asked by my management to furnish a list of all the effective rights / permissions delegated on the Active Directory object for our Domain Admins group. I initially figured I'd use the Effective Permissions tab in Active Directory Users and Computers, but had two problems with it. The first was that it doesn't seem very accurate, and the second was that it requires me to enter the name of a specific user, and it only shows me what it figures are the effective permissions for that user. Now, we have more than 1000 users in our environment, so there's no way I can possibly enter 1000 user names one by one. Plus, there is no way to export that information either. I also looked at dsacls from MS, but it doesn't do effective permissions. Someone pointed me to a tool called ADUCAdmin, but that seems to falsely claim to do effective permissions. Could someone kindly help me find a way to obtain this listing? Basically, I need to generate a list of all the modify effective permissions granted on the Domain Admins group object, along with the list of all the admins to which these permissions are granted. In case it helps, I don't need a fancy listing - simple text / CSV output would be enough. I would be grateful for any assistance, since this is time- and security-sensitive for us.

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Server Fault community, After researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, based mainly on maturity, license and support: Ceph, Lustre, GlusterFS, HDFS, FhGFS, MooseFS, XtreemFS.

    The key properties that our system should exhibit:
    - open source and liberally licensed, yet production ready, e.g. a mature, reliable, community- and commercially-supported solution;
    - ability to run on commodity hardware, preferably designed for it;
    - high availability of the data, with the most focus on reads;
    - high scalability, so operation over multiple data centres, possibly on a global scale;
    - removal of single points of failure through replication and distribution of (meta-)data, e.g. fault tolerance.

    The sensitivity points that were identified, and resulted in the following questions, are:
    - Transparency to the processing layer / application with respect to data locality, e.g. knowing where data is physically located at a server level, mainly for resource allocation and fast, high-performance processing: how can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
    - POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The question here mainly is: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX compliant by design; what are the pros and cons?
    - What about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system is preferable because of reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this?

    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

  • clocksource tsc unstable

    - by amorfis
    Ok, now I have a real server fault ;) After some time from booting (about one minute) my server hangs; all I can do is a hard reset. After the restart, I find this in /var/log/kern.log:

        Jul 29 22:38:57 leonidas kernel: [   90.729598] longhaul: Failed to set requested frequency!
        Jul 29 22:38:57 leonidas kernel: [   90.731252] longhaul: Enabling "Ignore Revision ID" option.
        Jul 29 22:38:57 leonidas kernel: [   91.201461] longhaul: Failed to set requested frequency!
        Jul 29 22:38:57 leonidas kernel: [   91.201482] longhaul: Disabling ACPI C3 support.
        Jul 29 22:38:57 leonidas kernel: [   91.204230] longhaul: Disabling "Ignore Revision ID" option.
        Jul 29 22:38:58 leonidas kernel: [   91.416133] longhaul: Failed to set requested frequency!
        Jul 29 22:38:58 leonidas kernel: [   91.416152] longhaul: Enabling "Ignore Revision ID" option.
        Jul 29 22:38:58 leonidas kernel: [   91.960048] Clocksource tsc unstable (delta = -105611479 ns)

    I found some resources on the net that said to change the clocksource or to disable ACPI. I tried disabling ACPI, but it didn't help (though I noticed it took longer before hanging). I can't change the clock to hpet, because my system doesn't have one. The output of cat /sys/devices/system/clocksource/clocksource0/available_clocksource is:

        acpi_pm jiffies tsc

    My system is Ubuntu Server on VIA Epia hardware.
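
    A sketch of the two usual ways to force a different clocksource; given the available_clocksource output above, acpi_pm is the natural candidate:

        # try it live first (takes effect immediately, lasts until reboot)
        echo acpi_pm | sudo tee /sys/devices/system/clocksource/clocksource0/current_clocksource

        # make it permanent by appending clocksource=acpi_pm to the kernel line
        # in /boot/grub/menu.lst, e.g.:
        #   kernel /boot/vmlinuz-... root=... ro quiet clocksource=acpi_pm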

  • HAProxy crashes on all requests in 1.5-dev12

    - by Daniel Hough
    I'm having an issue where HAProxy is crashing with no explanation when I switch from 1.4.12 to 1.5-dev12. The reason I'm switching is the SSL offloading. My config file doesn't have any errors - it's quite simple and works well with 1.4 - but for some reason when I run it with 1.5-dev12, I see the logs noting that the two backends I have have been set up, and then when I hit one of the frontends, I get an HTTP 400 in the browser and suddenly HAProxy isn't running any more when I check. I understand that a common workaround for the lack of SSL support in HAProxy is to use Stud, and I may go with that since I need an SSL solution for my service, but before I delve into that world I thought I might see if anybody has experienced the same problems and might know how to fix it. The server is Ubuntu 10.04 and I followed the make instructions on the Exceliance blog here. EDIT: On the advice of Kyle Brandt, I did a bit more investigation. I attached gdb to the haproxy process, and when the crash occurred this is what I got:

        Program received signal SIGSEGV, Segmentation fault.
        0x0804e5c2 in dequeue_all_listeners (list=0x9e1a418) at src/protocols.c:184
        184     list_for_each_entry_safe(listener, l_back, list, wait_queue) {

    P.S. HAProxy is awesome, so thank you Exceliance for providing us with something so useful :)
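
    For anyone trying to reproduce the capture above, a rough sketch of attaching gdb to the running process (this assumes a single haproxy process and a binary built with debug symbols):

        sudo gdb -p "$(pidof haproxy)"
        (gdb) continue
        # ...after the SIGSEGV is reported:
        (gdb) backtrace full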

  • Ways to parse NCSA combined-based log files

    - by Kyle
    I've done a bit of site: searching with Google on Server Fault, Super User and Stack Overflow. I also checked non-site-specific results and didn't really see a question like this, so here goes... I did spot this question, related to grep and awk, which has some great knowledge, but I don't feel the text-qualification challenge was addressed. This question also broadens the scope to any platform and any program. I've got squid or apache logs based on the NCSA combined format. When I say based, I mean the first n columns in the file are per NCSA combined standards; there might be more columns with custom stuff. Here is an example line from a squid combined log:

        1.1.1.1 - - [11/Dec/2010:03:41:46 -0500] "GET http://yourdomain.com:8080/en/some-page.html HTTP/1.1" 200 2142 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; C) AppleWebKit/532.4 (KHTML, like Gecko)" TCP_MEM_HIT:NONE

    I'd like to be able to parse n logs and output specific columns, for sorting, counting, finding unique values, etc. The main challenge - what makes it a little tricky, and also why I feel this question hasn't yet been asked or answered - is the text-qualification conundrum. When I spotted asql from the grep/awk question I was very excited, but then realised that it didn't support combined out of the box; something I'll look at extending, I guess. Looking forward to answers, and learning new stuff! Answers don't have to be limited to platform or program/language. For the context of this question, the platforms I use the most are Linux and OS X. Cheers
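
    One way to handle the text qualification, sketched with gawk 4's FPAT (which defines fields by what they contain rather than by separators); the field numbers assume the standard combined layout, with any custom columns following:

        # treat bare words, "quoted strings" and [bracketed dates] as single fields:
        # $1 host, $4 date, $5 request, $6 status, $7 bytes, $8 referer, $9 user agent
        gawk 'BEGIN { FPAT = "([^ ]+)|(\"[^\"]*\")|(\\[[^]]*\\])" }
              { print $6, $5 }' access.log | sort | uniq -c | sort -rn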

  • Creating a partitioned raid1 array for booting a Debian Squeeze system

    - by gucki
    I'd like to have the following raid1 (mirror) setup: /dev/md0 consists of /dev/sda and /dev/sdb. I created this raid1 device using:

        mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sda /dev/sdb

    This gave a warning about the metadata being 1.2 and my system possibly not booting. I cannot use 0.9 because it restricts the size of the raid to 2TB, and I assume grub shipped with the latest Debian (Squeeze) should be able to handle metadata 1.2. So then I created the needed partitions like this:

        # creating new label (partition table)
        parted -s /dev/md0 mklabel 'msdos'
        # creating partitions
        sfdisk -uM /dev/md0 << EOF
        0,4096
        ,1024,S
        ;
        EOF
        # making root filesystem
        mkfs -t ext4 -L boot -m 0 /dev/md0p1
        # making swap filesystem
        mkswap /dev/md0p2
        # making data filesystem
        mkfs -t ext4 -L data /dev/md0p3

    Then I mounted the root partition, copied a minimal Debian install inside, and temporarily mounted /dev, /proc and /sys. After this I chrooted into the new root folder and executed:

        grub-install --no-floppy --recheck /dev/md0

    However this fails badly with:

        /usr/sbin/grub-probe: error: unknown filesystem.
        Auto-detection of a filesystem of /dev/md0p1 failed.
        Please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to

    I don't think it's a bug in grub (so I didn't report it yet) but a fault of mine. So I really wonder how to properly set up my raid1; everything I tried so far has failed.
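
    A common alternative layout, sketched on the assumption that the goal is simply a bootable two-disk mirror: partition the raw disks first and build one md array per partition, so grub only ever sees ordinary partitions rather than a partitioned md device:

        # partition /dev/sda (e.g. with the sfdisk recipe above), then clone its table
        sfdisk -d /dev/sda | sfdisk /dev/sdb

        # one mirror per partition; 0.90 metadata is fine for the small boot array,
        # since only the big data partition runs into the 2TB limit
        mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

        # install grub to both MBRs so either disk can boot
        grub-install /dev/sda && grub-install /dev/sdb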

  • WCF FaultContracts not working for Silverlight Client Proxy

    - by sarwara
    We have a Silverlight application client and a WCF service hosted as a managed Windows service, exposing service contracts over BasicHttpBinding. We send a FaultContract on the wire when an exception is caught in the WCF service code. We are facing the following problem: A. If we make a synchronized proxy call (in the case of a Windows or web client), we are able to catch the FaultContract. B. If we use the Silverlight client, which uses asynchronous calls, we are unable to catch the FaultContract. We need help on the latter problem (B). Thanks in advance

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes - that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failover of the AZ where the monitoring server sits, and essentially have a second server pick up the checking load (active/passive, active/active - so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not for NRPE checks, as they're pretty self-explanatory, but for things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can report bad/no ping or timeouts as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where one worker reports a service check as critical, and a second worker node (positioned in another datacenter/AZ) must also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?

  • Files showing in smbclient but not smbmount

    - by Staale
    I have a samba folder that I try to access through smbclient, and I can browse it just fine. However, when mounting it through smbmount, all the folders under the share appear empty. I can list the folders directly under the share fine, but they all appear empty. smbclient:

        # smbclient //server/share -U username -W workgroup password

    smbmount:

        # sudo smbmount //server/share mntpoint -o user=username,workgroup=workgroup,password=password

    I have also tried with domain=workgroup instead of workgroup=workgroup; both give the same result. No error messages, everything mounts fine, but all the folders under mntpoint are empty, despite the same folders being non-empty when using smbclient. Are these using different libraries? How can I debug the error? Additionally, if I try to mount //server/share/folder, doing an ls results in a segmentation fault. Using dmesg I find:

        kernel BUG at /build/buildd/linux-2.6.28/fs/cifs/cifs_dfs_ref.c:315!

    Full trace: http://pastebin.com/m70adc213 Using a credentials file, I first get empty dirs, then "Resource temporarily unavailable". In my dmesg I see the following output:

        CIFS VFS: compose_mount_options: Failed to resolve server part of \\srv\share to IP: -11
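
    The cifs_dfs_ref.c trace suggests the kernel client is chasing DFS referrals on that share. A sketch of mounting with mount.cifs directly and turning on verbose CIFS logging, which may make the failure visible; server, share and mount point are placeholders:

        # mount via the cifs vfs with explicit options
        sudo mount -t cifs //server/share /mnt/point \
            -o username=username,password=password,domain=workgroup

        # enable verbose cifs logging, retry the ls, then read dmesg
        echo 1 | sudo tee /proc/fs/cifs/cifsFYI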

  • How to get the status code (409 for conflict) and message ("your code is conflicted") through the fault event in Flex

    - by Ankur
    I am calling a server method through HTTPService from the client side, and in the response I am sending the status code (409 for conflict) and the message ("your code is conflicted") from the server side on error. But on the client side, in the fault event, I am getting status code 0 and the message:

        faultCode:Server.Error.Request faultString:'HTTP request error' faultDetail:'Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032: Stream Error. URL:"

    I want to know how to get the actual status code and message in the fault event.

  • Giving the root user priority to maintain Debian (while the server is collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize any or specific root activity before everything else? For instance, several times per year something goes wrong (usually human fault, by overstressing apache/mysql) and the system becomes unresponsive under a heavy load like 200 (8-core CPU). I know there are limits for PHP scripts to run and then be killed, but that's not the way, because this limit would have to be at least 45 minutes long. The problem is that by the time I'm able to log in via SSH and restart apache/mysql under this server stress, it nearly hits those 45 minutes anyway. Also, a hardware restart usually causes fsck to run at boot time on all hard drives, since it's usually been quite a while since the box was last restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart apache/mysql? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restart (or rather stop) commands wouldn't take so long? One option comes to my mind: use nice for apache/mysql - but no way. I can't risk limiting those two vital apps 24/7... or could I? I'm a little bit scared that other system processes would then slow the pages down too much: any backup process, swap (if any), etc. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hw/sw resource available. I can't throttle it the whole time, just at certain points when the system gets unresponsive, so I can maintain it.
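
    One pragmatic angle, sketched below: a root cron job that notices the load spike and performs the restart before a human has to fight for a shell. The threshold, service names and one-minute cadence are all assumptions to adapt:

        #!/bin/bash
        # /usr/local/sbin/load-watchdog -- from root's crontab:
        #   * * * * * /usr/local/sbin/load-watchdog
        THRESHOLD=50                      # 1-minute load average that counts as "collapsing"

        load=$(cut -d' ' -f1 /proc/loadavg)
        if [ "${load%.*}" -ge "$THRESHOLD" ]; then
            logger -t load-watchdog "load $load >= $THRESHOLD, restarting apache/mysql"
            /etc/init.d/apache2 stop
            /etc/init.d/mysql restart
            /etc/init.d/apache2 start
        fi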

  • What causes a switch port to receive data not destined for it?

    - by user1693454
    We are having an intermittent fault which is affecting one of our control systems on one of our HP ProCurve switches. For some reason, this PLC (10Mbit port, 192.168.6.56), which is attached directly to the HP switch, intermittently starts receiving data which is not destined for it. The data is being sent from a Thecus NAS with the latest firmware (192.168.6.218) to a physical IBM server running Win2003R2 and SAP (192.168.6.225). The problem does not always involve this server - it has gone to other physical servers in the past too - but it always comes from the Thecus NAS. I am using a monitor port to wireshark what is going in/out of the PLC. Normally there would be about 1MB in/out per 2 or 3 minutes - only a server asking the state of the coils. When the problem occurs, there is a flood of data being put onto the PLC line - in this captured instance, about 67MB in less than a minute. Because of this, there is no way the PLC can be queried, as the port is effectively DoSed, which in turn kills part of our factory. I know that having production on the same VLAN as IT is not a good idea - I agree - however it cannot be changed at the moment (it will have to wait 3 months), and the problem has only started happening in the last 3 months. Here is a screen cap of one of the packets being sent from the Thecus NAS, captured from the PLC port on the HP switch: [screenshot] There are over 700 of these in this one 1024kb file. If anyone has any idea what could be going on, some help would be greatly appreciated. If you need to know anything more, let me know! Cheers!
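
    This pattern looks like classic unknown-unicast flooding: if the switch's MAC table entry for the destination server ages out (or the table overflows), frames for it get flooded out every port, including the PLC's. A sketch for confirming that from the monitor port; the interface and both MAC addresses are placeholders:

        # -e prints MAC addresses; frames sourced from the NAS that arrive here
        # without the PLC's MAC as destination are being flooded by the switch
        sudo tcpdump -i eth1 -e -n \
            'ether src 00:14:fd:00:00:01 and not ether dst 00:80:f4:00:00:02'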

  • Windows Server 2003 (w/Exchange) move to new machine

    - by James Booker
    I have an ageing domain controller (the only one on a 10-PC network) which needs rebooting often. I have a Dell PowerEdge 2850 server doing nothing, so I'd like to move the DC to that, but here's the catch: I don't have the Win2k Server Std install media any more, as it's been lost. I purchased "Easus Todo Backup Advanced Server", which claims to be able to recover to dissimilar metal, but it's not quite working (although I don't think it's the product's fault). I know the server and PERC RAID card are good, because I installed Ubuntu on the logical drive (4 x 72GB disks, RAID 5) with no problems. I've booted from the Easus Todo Backup CD (which is WinPE-based) and recovered to the logical disk on the RAID (after installing the driver inside the WinPE environment from a NAS drive). The problem is that when I boot the server, I can get to the OS selection menu, but any option results in a blank screen, with no errors. I figure this is probably because the driver wasn't installed on the old machine (which is IDE-based (I know, I know!) and doesn't have a RAID controller). I've booted from the CD and copied the mraid35x.sys file to the c:\windows\system32\drivers folder on the recovered system, but it makes no difference. I made a boot.ini with rdisks 0-10 defined, and booting from each of these resulted in a file error (i.e. 'this isn't a real disk') - the only disk that gets any response (the blank screen) is multi(0)disk(0)rdisk(0)partition(1), which just gives me the blank black screen and no disk activity. Is there any way I can force the driver to be installed on the source system (so I can do a full backup again)? I've tried right-clicking the oemsetup.inf and clicking Install, but it didn't actually do anything. I attempted to force it with the 'Add New Hardware' wizard, forcing with the 'Have Disk' option, but it still gave me no hardware to select. Also, I've got an identical machine running WinXP which uses the PERC driver successfully (which was obviously set up at install time), and its boot.ini settings are the same: multi(0)disk(0)rdisk(0)partition(1). Any ideas would be appreciated.

  • IIS7 / ASP.NET 4.0 - 404 errors only for external clients

    - by dmcgiv
    We recently moved an ASP.NET 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web Edition) we noticed that some .aspx pages are serving 404 errors. What is strange is that 1) the pages exist, 2) if you browse from the server itself, the pages are served as normal - only external clients get the 404, 3) it's the default 404 error page, not the one configured in the web.config, and 4) it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally. We are using a URL rewriter module, which I first thought might be at fault, but then I realised that only some of the failing pages are being rewritten. I've also tested removing the HTTP module, and the problem still persists. As everything works as expected when logged onto the server, I was thinking it might be some sort of permission issue, but why would it only affect a few pages? I turned on failed request tracing, and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense?

    1. A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files.
    2. Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files.

    The reason for this question is a disagreement between me (SQL developer) and a co-worker (DBA). In every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level." But every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs, and everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

  • How can records be deleted without activating the delete trigger?

    - by Servaas Phlips
    Hello there, For about a month now, we have been experiencing records that disappear from our database without any reason. (Part of) our database structure is at http://i.imgur.com/i15nG.png Now, users and credentials are never supposed to be deleted. Thanks to our backups, however, we noticed that users had unfortunately disappeared from the database. The users and credentials that disappear appear to be completely random. In order to find out which application deletes these records, we created triggers with the following checks:

        CREATE TRIGGER Credential_SoftDelete ON [Credential]
        INSTEAD OF DELETE
        AS
        DECLARE @message nvarchar(255)
        DECLARE @hostName nvarchar(30)
        DECLARE @loginName nvarchar(30)
        DECLARE @deletedId nvarchar(30)
        SELECT @deletedId = credentialid FROM deleted;
        SELECT @hostName = host_name, @loginName = login_name
            FROM sys.dm_exec_sessions WHERE session_id = @@SPID;
        SELECT @message = '[FAULT] Credential : ' + USER_NAME() + ' deleted ' + @deletedId
            + ' on ' + @@SERVERNAME + ' from [' + @hostName + ' by ' + @loginName;
        EXEC xp_logevent 50001, @message, ERROR
        GO

    After we added this trigger, we hoped to find out which application deletes these credentials by searching the log files. Unfortunately, the credentials are still being deleted, and the trigger Credential_SoftDelete never logs anything. I did try running a delete on the database where the trigger is installed and where the users have disappeared:

        DELETE FROM [User] WHERE userid=296

    and the trigger prevented deletion of this user and also logged it in the event log. This was on exactly the same database where the users disappeared (so no test copy or anything like that). Please note that we also use replication; the type of replication we use is merge replication. How is this possible? Can the fact that we use replication on this database be the cause of this problem?

  • Website and file/directory permissions

    - by mathiass
    I've been given a task to fix this one website. One of its issues is that on one page the images have broken links - the images are not showing, and clicking on an image (i.e. the direct link to the image file) results in a 403 (Forbidden) error. I am looking for some feedback on what the possible cause could be. The directory where the images are stored has the following permissions:

        drwxrws--- www "group" 10240 Aug 2008 "image directory name"

    (I had to hide the names.) I checked the page source code, and everything seems to be in place. The rest of the site, and other images outside that image directory, are showing fine. I was told that recently there have been some changes to the server. I'm trying to assume that there is no fault in the source code, and that the permissions are - or used to be - correct (since the site worked before, and no recent changes have been made to the site itself). My only thoughts at the moment are that either: a) the directory permissions should be drwxrws--x (executable for other users), or b) there is a change in the server settings that I don't know of. Is there anything else I should check?
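
    A quick way to test the permission theory from a shell, sketched below; the path and the web server account (www) stand in for whatever the server actually uses:

        # show owner/permissions of every component along the path; any directory
        # missing x for the server's user is enough to produce a 403
        namei -l /var/www/site/images/photo.jpg

        # try reading the file as the web server user
        sudo -u www cat /var/www/site/images/photo.jpg > /dev/null && echo readable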

  • How to fix "The connection was restarted"?

    - by Altar
    I have a PHP script with AJAX. A few days ago, without any changes being made, the script stopped working properly. The script works sometimes, but it is very slow; other times it doesn't load completely; yet other times it loads an empty page or shows a "The connection was restarted" message. We have tried loading the page from 4 different computers in two different countries (two different ISPs). There is nothing relevant in the server's log. We contacted iPage.com hosting support and sent screenshots, and all we managed to get from them was an incomplete screenshot and the message 'We tested. The page loads.' So they won't admit the fault. It looks like they loaded the page once, declared it working, and went back to playing solitaire. I know their support; we've had all kinds of problems with iPage hosting. My questions are: 1. Is there any way to make this error easier to reproduce? I mean, instead of fixing the error, getting it to appear more often. 2. Is there any way to test a server's responses, to see if it drops connections?
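
    A sketch of a reproducibility test with curl: hammer the page and record the status code and timing of every request, so intermittent resets show up as counted rows instead of anecdotes (the URL is a placeholder):

        # 100 requests; a status of 000 means curl never got a response
        for i in $(seq 1 100); do
            curl -s -o /dev/null --max-time 30 \
                 -w '%{http_code} %{time_total}s\n' 'http://example.com/page.php'
        done | tee results.txt | cut -d' ' -f1 | sort | uniq -c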

  • Joomla Mailto Component unable to send mails occasionally

    - by kyle
    Greetings, I certainly hope someone can provide some enlightenment on my problem. Currently I have 2 Joomla sites, with layout and menus each a replica of the other. I noticed that on both Joomla sites I occasionally encounter "Unable to send mail" after a form submission. Is this the fault of my server, or the fault of Joomla's PHP mailer? I would certainly love to approach my hosting company for a solution, but I do not want to place a false accusation on them.
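
    One way to separate the two suspects, sketched below: bypass Joomla entirely and call PHP's mail() in a burst from the command line on the same host. If this also fails intermittently, the server's mail transport is the likelier culprit (the address is a placeholder):

        # 20 sends straight through PHP's mail(), no Joomla involved;
        # mail() returning false points at the local transport
        for i in $(seq 1 20); do
            php -r 'var_dump(mail("you@example.com", "test " . $argv[1], "body"));' "$i"
        done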

  • Secure filesharing protocol for fileserver

    - by Hugo
    I'm setting up a fileserver, and I want lots of clients to be able to access it easily. Up to now I've always used SSHFS to share between different PCs, but since I'm setting up a single fileserver, I'm looking at other common alternatives. So far I've seen:
    - AFS: It seems to have no security; traffic is unencrypted, so it would require an SSH tunnel. If I'm to use SSH, I'd just use SSHFS.
    - NFS: Same as above. Also, setting up the server is not so straightforward; it doesn't seem to be KISS enough - at least not for my liking.
    - SMB: Same as AFS. It also seems not to be too well documented and, technically, seems a bit poor. It also seems the protocol isn't formally standardized.
    SSHFS has security, but as a downside it requires every user to have an account on the server, and there's no way to make a certain directory PUBLIC either. I don't think it has locking, and it isn't very fault-tolerant. Are there any alternatives I'm missing?
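
    On the NFS point above: NFSv4 with Kerberos does provide authenticated, optionally encrypted transport, at the cost of running a KDC. A sketch of the client side, assuming the server already exports the tree and both machines are in the Kerberos realm (the hostname and mount point are placeholders):

        # krb5p = Kerberos authentication plus privacy (encryption) on the wire
        sudo mount -t nfs4 -o sec=krb5p fileserver.example.com:/ /mnt/share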
