Search Results

Search found 13454 results on 539 pages for 'ws security'.

Page 462/539 | < Previous Page | 458 459 460 461 462 463 464 465 466 467 468 469  | Next Page >

  • Anyone recommend a program to print multiple HTML files at once for end users?

    - by Keith Bentrup
    I have some clients with multiple HTML files in folders that are occasionally updated and printed. They would like to print them all at once without having to open each one. I typically do this with a quick command myself, but I'm not aware of any freeware for it, and a Google search hasn't turned one up, so I'm hoping someone can help. I'd rather not use a script for this, for various security, ease-of-use, and familiarity reasons; I'd rather just point them to a simple program they can download and use on their Windows desktops. Does anyone know of one, or of some other easy solution? Maybe I'm overlooking the obvious. If anyone's curious, this is what I do for myself (not for my clients):

        for %h in (*.html) do type "%h" >> all.htm

    then open all.htm and print. If I need a page break after each document, I just search and replace </body> in all.htm with <p style="page-break-after:always">&nbsp;</p></body>. It's quick and simple, but too unfamiliar for them. Thanks!
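
    (Not the point-and-click tool being asked for, but for reference, the same concatenate-with-page-breaks step can be scripted. A minimal sketch in Python; the output name all.htm mirrors the command above and is otherwise arbitrary.)

        import glob

        PAGE_BREAK = '<p style="page-break-after:always">&nbsp;</p></body>'

        # Concatenate every .html file in the current folder into all.htm,
        # inserting a CSS page break before each closing </body> tag.
        with open("all.htm", "w", encoding="utf-8") as out:
            for name in sorted(glob.glob("*.html")):
                with open(name, encoding="utf-8") as src:
                    out.write(src.read().replace("</body>", PAGE_BREAK))

        # Open all.htm in a browser and print it as a single job.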

    Read the article

  • MSSQL 2008 login failed for windows authentication

    - by Force Flow
    I'm running Microsoft SQL Server 2008 on a Windows 2008 server. Server authentication is set to SQL Server and Windows Authentication mode. I have created an Active Directory security group "xyz app users" and added to it a normal user (without any Active Directory admin privileges) and a user with domain admin privileges. I have added the group to the SQL Server management console as a login. This group is a member of the public server role and is mapped to two databases. On a workstation, when the normal user is logged in, I can configure a DSN ODBC connection and successfully create the DSN and test the SQL connection. However, when I'm logged in as the user with domain admin privileges and attempt to configure the DSN ODBC connection, I can't get past the login ID configuration screen. If I select "Windows authentication" and click "Next", I get an error:

        Connection failed:
        SQLState: '28000'
        SQL Server Error: 18456
        [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'mydomain\myuser'

    In the server's application event log, this error appears:

        Login failed for user 'mydomain\myuser'. Reason: Token-based server access validation
        failed with an infrastructure error. Check for previous errors. [CLIENT: 172.x.x.x]

    And in MSSQL's event logs:

        Error: 18456, Severity: 14, State: 11

    The solutions I've seen so far do not seem to fit this situation (some are only applicable when BUILTIN\Administrators is being used locally on the server, which is not the case here).
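
    (One way to take the DSN wizard out of the picture is to reproduce the trusted connection from a script while logged in as each user. A sketch using the third-party pyodbc package; the driver, server, and database names are placeholders.)

        import pyodbc

        # Attempt a Windows-authenticated (trusted) connection, bypassing the DSN wizard.
        conn_str = (
            "DRIVER={SQL Server};"            # or whichever SQL Server ODBC driver is installed
            "SERVER=myserver\\MYINSTANCE;"    # placeholder instance name
            "DATABASE=mydb;"                  # placeholder database
            "Trusted_Connection=yes;"
        )
        try:
            conn = pyodbc.connect(conn_str, timeout=10)
            row = conn.cursor().execute(
                "SELECT SUSER_SNAME(), auth_scheme "
                "FROM sys.dm_exec_connections WHERE session_id = @@SPID"
            ).fetchone()
            print("Connected as", row[0], "via", row[1])
            conn.close()
        except pyodbc.Error as exc:
            # Error 18456 state 11 reported on the server side points at token/permission
            # evaluation for this login rather than at the client's DSN settings.
            print("Connection failed:", exc)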

    Read the article

  • Are my web server permissions for uploading correct?

    - by user1699176
    I'm on Debian and I have my website in the directory /srv/www/mysite.com/public_html. I set the owner of /srv/www to www-data:www-data. I have root disabled and created a sudo user with id 1000:1000. I would also like to use this user to upload to /srv/www, so I added my sudo user to the www-data group. I originally got a message saying that I didn't have permission to upload a file to that directory. After playing around with permissions for a while I was finally able to upload properly, but I'm not sure if this setup is correct. I'm hesitant to change it for now since it actually works, so I thought I'd ask for advice. I think what I ended up doing was this:

        sudo chown -R www-data:www-data /srv/www
        sudo chmod g+s /srv/www
        sudo usermod -aG www-data myuser
        sudo chgrp -R www-data /srv/www
        sudo chmod -R g+w /srv/www

    When I was finally able to successfully upload a file (with FileZilla), it showed the owner as myuser myuser. Shouldn't it have been www-data myuser? My question is whether this is correct and whether there are any potential security issues. For example, I wasn't sure if I was actually supposed to make "myuser" own the /srv/www directory instead:

        sudo chown -R myuser:myuser /srv/www

    or maybe:

        sudo chown -R www-data:myuser /srv/www

    If you need more info, let me know. Thanks.

    Read the article

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client on a private LAN behind a Netgear wgr614v7 router, and a remote Debian server outside. I'd like to tunnel all traffic (or selected ports/protocols) through this outside server, so that when I'm on the Windows machine and request serverfault.com, the request appears to come not from the wgr614v7's public IP but from the server. It's not only about HTTP traffic; it's basically about everything: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of it. All their requests should just appear as coming from the server, and the tunnel between the two machines takes care of the packets. I'm aware of e.g. PuTTY and forwarding individual ports or using it as a SOCKS proxy; however, not many applications support this, and support in Windows itself looks non-existent to me. I might add it should be something reasonably easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common solutions for Linux (OpenSwan and StrongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex; on the other hand, maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open (source) solutions. What other options do I have, or which direction should I head?

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as myself, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for

        sudo rsync local root@remote:

    nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Files that belong to root are just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment into a freshly installed system. The two systems are actually two VMs under a single host, so copying root-owned files between them is not a big concern for me.

    EDIT 2: If I only want to copy my configured /root environment into the freshly installed system, I can use tar:

        sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: To formulate the question formally: how do I rsync files that belong to root between two systems, as a normal unprivileged user, without specifying a password, under these conditions:

      - The root account is locked on both systems, i.e. there are no root passwords. The only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
      - I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either.
      - The normal unprivileged user has entered their ssh passphrase into the ssh agent.

    Thanks
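
    (For reference, one commonly used pattern, which is an assumption here and not the asker's setup: grant the unprivileged account NOPASSWD sudo rights for rsync only on the remote side, then point the local rsync at "sudo rsync" remotely. A minimal sketch as a Python wrapper; the paths, host, and sudoers policy are all illustrative.)

        import subprocess

        # Assumes the remote sudoers (edited with visudo) contains something like:
        #   me ALL=(root) NOPASSWD: /usr/bin/rsync
        # so only rsync, not a general shell, runs as root without a password.
        cmd = [
            "rsync", "-a", "--delete",
            "--rsync-path", "sudo rsync",   # remote end of the transfer runs as root
            "/root/",                       # local source; reading it may itself need sudo -E
            "me@remote:/root/",
        ]
        subprocess.run(cmd, check=True)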

    Read the article

  • What ports, besides 80, need to be available to send (only send) email using phpmailer to gmail over SSL?

    - by Wobblefoot
    Using PHPMailer I keep getting a 110 timeout and "Unable to connect to host" when sending email from my web server. The authentication details are right and they work on another server I have (login, password, ports, etc., and the Gmail account is set up for SSL connections on 465), but it's failing on my new server.

    FIREWALL: I allow related/established, port 80, and a port for SSH on INPUT, then this on OUTPUT:

        7906  474K DROP   tcp -- any any anywhere              anywhere             tcp dpt:smtp
           0     0 ACCEPT tcp -- any any localhost.localdomain yw-in-f109.1e100.net tcp dpt:submission
           0     0 ACCEPT tcp -- any any localhost.localdomain gx-in-f109.1e100.net tcp dpt:ssmtp
           0     0 DROP   tcp -- any any anywhere              anywhere             tcp dpt:submission
           9   540 DROP   tcp -- any any anywhere              anywhere             tcp dpt:ssmtp

    This OUTPUT chain works on my other server, and disabling it doesn't get mail delivered either.

    WEB SERVER: Varnish (80), Nginx (8088), Drupal 7, PHP5-FPM, APC, MySQL. All works beautifully, except for outgoing email. What else could it be? I understand PHPMailer does NOT require a local MTA or procmail (this is sort of the point - I don't want the security or admin overhead of a full-blown MTA on my web server). Am I wrong? Do I need an MTA as well? What local ports and programs are used to authenticate over SSL and route mail using PHPMailer? Any ideas at all greatly appreciated - I've wasted a day on this nonsense already!
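
    (To separate firewall problems from PHPMailer configuration, the outbound SSL connection to Gmail can be tested directly from the web server. A minimal sketch; the credentials are placeholders, and 465/smtps plus 587/submission are the ports a mail client normally needs besides DNS.)

        import smtplib

        # Quick outbound connectivity/auth test to Gmail's SMTPS endpoint (TCP 465).
        try:
            server = smtplib.SMTP_SSL("smtp.gmail.com", 465, timeout=15)
            server.login("me@example.com", "app-password-here")   # placeholder credentials
            print("Connected and authenticated over SSL on port 465.")
            server.quit()
        except (OSError, smtplib.SMTPException) as exc:
            # A timeout here, with credentials known to be good, points at the OUTPUT
            # chain or the network path rather than at PHPMailer itself.
            print("Failed:", exc)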

    Read the article

  • Incorrect Internal DNS Resolution

    - by user167016
    I'm having a DNS issue on Server 2008 R2. The first clue was that, after being off the network for a month, I could no longer Remote Desktop into my workstation by name; it wouldn't find it, both via VPN and internally. If I connect using its IP, that works. Now I notice that in the server's Share and Storage Management, under Manage Sessions, it's displaying the incorrect computer name for some users. So I try, for one example:

        Ping -a 192.168.16.81
        Pinging BOBS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data:
        (replies all successful)

    Then I try:

        Ping RICHARDS_COMPUTER
        Pinging RICHARDS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data:
        (all replies successful)

    In DHCP, .81 belongs to RICHARDS_COMPUTER. I did try flushdns. Not sure if this is related (apologies if it's not), but when I try to connect, I also get prompted: "The identity of the remote computer cannot be verified. Do you want to connect anyway? The remote computer could not be authenticated due to problems with its security certificate. It may be unsafe to proceed." It then lists the correct name as the name in the certificate from the remote computer, but claims that the certificate is not from a trusted authority. Any thoughts are most appreciated!
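
    (A small sketch that compares forward and reverse lookups for the two machines named above; a mismatch like the one shown usually means a stale A or PTR record, or a stale DHCP lease, that needs cleaning up or scavenging. The host names are taken from the examples and are otherwise placeholders.)

        import socket

        hosts = ["BOBS_COMPUTER", "RICHARDS_COMPUTER"]

        # Compare what forward (A) and reverse (PTR) lookups return for each machine,
        # to spot stale or duplicate records in the zone.
        for name in hosts:
            try:
                addr = socket.gethostbyname(name)
                rname, _, _ = socket.gethostbyaddr(addr)
                print(f"{name} -> {addr} -> {rname}")
            except OSError as exc:
                print(f"{name}: lookup failed ({exc})")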

    Read the article

  • Cannot connect to a shared network drive

    - by dublintech
    I am using Windows 7 and cannot connect to a shared network drive on another machine. I can ping the machine. I can Remote Desktop to the machine. The machine is on the same subnet. My friend with the exact same laptop as me (on the same network, same workgroup) can connect to the shared folder. The machine I am trying to connect to and my friend's machine can both see shared folders on my machine. I also cannot see shared folders on the friend's laptop. When I select diagnose, Windows tells me nothing useful. When I select "see details" on the error pop-up, I see: Error code 0x80004005 (Google doesn't help much). I can nbtstat -a the machine that has the shared folder. When I try with my firewall turned off, the same happens. I have ensured my Windows 7 has all updates. I run Security Essentials to ensure my laptop is clean. I ran CCleaner to clean up my registry. Same error. I have tried with my laptop on both wireless and Ethernet. As you can imagine, I am banging my head against the wall on this one.

    Read the article

  • Moving an external hard drive while running

    - by user1108939
    I mean physically moving the drive around. I've never dealt with external hard drives before; I had just plugged in this WD My Passport to test the transfer rate. At one point I "safely ejected" the drive. A minute later I decided to check the underside of the drive, not realizing the disk was still spinning. I lifted the drive, rotating my wrist about 70 degrees to the left, and heard a sequence of three high-pitched sounds. I couldn't determine whether that was an indication beep from an internal security feature or the head scratching the platter (oh god...). The drive stopped and USB power was disconnected. I reconnected it, and it shows up fine and reads/writes. The drive was not reading or writing when I moved it. Did I damage my drive? Are these things that fragile? I thought them to be at least as durable as a standard 2.5" internal drive. Am I mistaken?

    Read the article

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server and Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare over time to see if I have things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks
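
    (As one concrete example of the kind of check being described, a minimal sketch that tallies failed SSH logins per source IP from auth.log. It assumes the stock Debian/Ubuntu log path and format, and reading the log may require membership in the adm group or sudo.)

        from collections import Counter
        import re

        LOG = "/var/log/auth.log"   # default location on Ubuntu
        pattern = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

        counts = Counter()
        with open(LOG, errors="replace") as fh:
            for line in fh:
                m = pattern.search(line)
                if m:
                    counts[m.group(1)] += 1

        # Print the ten noisiest source addresses; a sudden spike is worth a closer look.
        for ip, n in counts.most_common(10):
            print(f"{n:6d}  {ip}")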

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server, an AWS instance, that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it. It just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

        ss -nt dst 64.156.167.125
        State   Recv-Q  Send-Q  Local Address:Port     Peer Address:Port
        ESTAB   0       0       10.185.147.150:41190   64.156.167.125:21
        ESTAB   0       0       10.185.147.150:48871   64.156.167.125:48557

    The FTP server is not in my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work because the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same Security Group (not necessarily the same distro or configuration, though). I understand it may be some issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang here while it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or ideas on what else to look at.
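
    (To rule out the individual FTP clients, the same passive-mode download can be reproduced from a short script with protocol tracing turned on. A sketch; host, credentials, and remote path are placeholders.)

        from ftplib import FTP

        # Reproduce the passive-mode download with full protocol tracing.
        ftp = FTP()
        ftp.set_debuglevel(2)                     # print every command and response
        ftp.connect("ftp.example.com", 21, timeout=60)
        ftp.login("user", "password")             # placeholder credentials
        ftp.set_pasv(True)                        # passive mode (ftplib's default)

        with open("catalog.gz", "wb") as out:
            ftp.retrbinary("RETR outgoing/catalog.gz", out.write)
        ftp.quit()

        # If this also stalls right after "150 File status okay", the data connection is
        # established but no payload arrives, which points at the network path (for
        # example MTU/MSS or a middlebox between the AWS instance and the server)
        # rather than at any one client.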

    Read the article

  • Connect from Mac OS X to Windows 7 Desktop

    - by jrn
    I am trying to connect from my MacBook to my Windows 7 machine within my own network; if it works from outside my network that's a plus, but it's not a must-have. My Windows 7 machine is freshly installed with Windows 7 Home Premium. It runs the built-in firewall with no settings changed so far, as well as Microsoft Security Essentials. So far I have tried CoRD and Microsoft's Remote Desktop Connection to connect from my Mac to my Windows machine, without any success. I did try disabling the firewall on my Windows machine but could not connect either; the reason I did this was to check whether a Windows firewall setting was preventing me from connecting. On top of that, I manually started the Remote Desktop Services and Remote Desktop Configuration services in services.msc. Is there anything else I have to enable for a remote desktop connection? Could there be any router setting I have to tweak? Since I do not want to connect from outside my own network, I thought I don't have to do any port forwarding. The error messages I get are all connection timeouts. I can, however, ping the hostname and/or IP address. Any help would be greatly appreciated. Thanks a lot, jrn
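
    (One quick way to narrow this down from the Mac side is to test whether anything is listening on the RDP port at all. A minimal sketch; the address is a placeholder for the Windows 7 machine.)

        import socket

        HOST, PORT = "192.168.1.50", 3389   # placeholder address, standard RDP port

        # Ping succeeding while this connect times out means the machine is up but
        # nothing is accepting RDP connections: service not listening, a firewall,
        # or an edition of Windows that does not accept incoming Remote Desktop sessions.
        try:
            with socket.create_connection((HOST, PORT), timeout=5):
                print("TCP 3389 is open - the RDP listener is reachable.")
        except OSError as exc:
            print("TCP 3389 not reachable:", exc)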

    Read the article

  • SMTP Server setting on Windows 2008 R2

    - by user223298
    I am very new to this and am just trying to configure an SMTP virtual server. I have followed a few threads to get it all running, but the mails are not being delivered. What I have done so far:

      1) Installed the SMTP server.
      2) SMTP server properties: on the General tab, the IP address is set to 'All Unassigned'. On the Access tab, authentication is anonymous access; everything else is left at the default settings. On the Delivery tab, outbound security is anonymous access; in the Advanced section, I entered the domain name in the FQDN field and localhost in the Smart host field.
      3) Created an inbound firewall rule for the SMTP service to allow connections to port 25.

    When I try to telnet, everything works up until the point where the mail has to be sent. Now, the sender's domain is different from the receiver's domain; I'm not sure if settings have to be changed to allow that. I had set relay restrictions on the SMTP server, but because I couldn't send mail, I thought I might as well make it work without the relay first. The error I see while sending the mail is "451 Timeout waiting for client input". I used to get a different error before, when I had relay restrictions on. Can anyone please point me in the right direction? Please let me know if you need more information. Thanks.
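
    (To take manual telnet typing out of the equation, the same test message can be submitted to the virtual server by a short script with the SMTP dialogue printed. A sketch; the addresses are placeholders, with the recipient deliberately in a different domain.)

        import smtplib
        from email.message import EmailMessage

        # Submit a test message to the local SMTP virtual server on port 25.
        msg = EmailMessage()
        msg["From"] = "test@mydomain.example"        # placeholder sender
        msg["To"] = "someone@otherdomain.example"    # placeholder external recipient
        msg["Subject"] = "SMTP virtual server test"
        msg.set_content("Test message.")

        with smtplib.SMTP("localhost", 25, timeout=30) as smtp:
            smtp.set_debuglevel(1)        # show the full SMTP dialogue
            smtp.send_message(msg)
        print("Message accepted by the virtual server; check the Queue and Badmail folders next.")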

    Read the article

  • Is it safe to format this partition?

    - by xanesis4
    On an Ubuntu server I own, I am running out of space. When I ran sudo parted /dev/sda -l to find all available drives, I got this:

        Model: ATA ST31000528AS (scsi)
        Disk /dev/sda: 1000GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system  Flags
         1      1049kB  256MB   255MB   primary   ext2         boot
         2      257MB   1000GB  1000GB  extended
         5      257MB   1000GB  1000GB  logical                lvm

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/server--vg-swap_1: 2135MB
        Sector size (logical/physical): 512B/512B
        Partition Table: loop

        Number  Start  End     Size    File system     Flags
         1      0.00B  2135MB  2135MB  linux-swap(v1)

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/server--vg-root: 998GB
        Sector size (logical/physical): 512B/512B
        Partition Table: loop

        Number  Start  End    Size   File system  Flags
         1      0.00B  998GB  998GB  ext4

    I understand /dev/mapper/server--vg-root is the filesystem, and /dev/sda1 has some stuff related to GRUB. But what about /dev/sda2 and /dev/sda5? When I tried to mount /dev/sda2, it said that I needed to specify the file system, which, according to the table, is nonexistent. So, is it safe to format this with, say, ext4 and mount it? Also, when I tried to mount /dev/sda5, it gave me this error:

        mount: unknown filesystem type 'LVM2_member'

    I assume it is NOT safe to reformat this. If I'm wrong, that would be great, because I could save some space. Please let me know either way. Thanks in advance!

    UPDATE: Here is the result of mount:

        /dev/mapper/server--vg-root on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        /dev/sda1 on /boot type ext2 (rw,acl)
        /dev/sda1 on /media/hd2 type ext2 (rw)

    Read the article

  • Windows Server 2008 - Non-Domain users can see my server shares

    - by ManovrareSoft
    Windows Server 2008 (server machine), Windows 7 Professional (client machine). I have a domain, set up by the client. The shares on the server are restricted correctly when a user logs on to the domain and uses their workstation; I have a few groups set up to restrict some access, but the groups are at their core "Domain Users". The problem I am having is that when a user brings in a laptop with Windows 7 Pro on it, they can type the name of the server in the Run dialog on the Start menu, like "\\SERVERNAME\", and access all of the shares freely; because they are not logged in to the domain, there seem to be no restrictions. I have reviewed the permissions on the folders: they are all granted to "Domain Users", and I have removed "Everyone" from the list of people able to see them. Guest access is also disabled. What am I doing wrong? The only group in the list is "Domain Users"; isn't a domain user a user that is logged in to the domain? How do I stop non-domain users from seeing the shared folders? I noticed this on Windows Server 2003 too at another time. I assume both had similar security issues, and neither was set up by me, so I am not sure what could have been enabled or specifically deactivated that makes this issue appear.

    Read the article

  • Where does Firefox store certificates and how to delete one?

    - by majid4466
    Hi all, the root cause of my problem is not known to me; whatever it is, I experience frequent DNS failures. When it happens I cannot browse to my Gmail inbox. I use two DNS settings: one is the public DNS server offered by OpenDNS, and the other is Google's free DNS server. When this happens I switch from the active setting to the other one and the problem goes away. But there is a side effect. When browsing to Gmail fails to load, after switching the DNS I receive an error saying the security certificate the site uses is only valid for OpenDNS. This is my wild guess at what is going on:

      1. OpenDNS fails to resolve mail.google.com to its IP.
      2. My ISP sends me a page showing search results for 'mail.google.com'.
      3. Since I have received some sort of page instead of a timeout, the browser mistakenly binds the certificate it has cached for 'mail.google.com' to the new domain. This search page is not served over https, so no exception is thrown by the wrong binding.
      4. After switching the DNS, the domain is correctly resolved to Gmail's server IP, and since this is on https, the handshake is triggered.
      5. Now, because of the wrong binding, which passed quietly as no handshake was involved, I receive the error saying the certificate used by 'mail.google.com' is only good for OpenDNS.

    I don't know much about DNS, and less about https and the process of establishing a secure connection. How correct is my explanation? How can I delete the wrong association and/or the certificate? Thanks for listening. P.S. The problem goes away by itself, but sometimes it takes several hours before Gmail works again.
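
    (Independent of Firefox's certificate store, it can help to check what the name resolves to and which certificate that address actually serves, once with each DNS setting active. A minimal sketch; the output file name is arbitrary.)

        import socket
        import ssl

        HOST = "mail.google.com"

        # 1. See what the active DNS setting resolves the name to.
        addr = socket.gethostbyname(HOST)
        print(f"{HOST} resolves to {addr}")

        # 2. Retrieve (without validating) the certificate served on port 443 and save it,
        #    so it can be inspected, e.g. with: openssl x509 -in served.pem -noout -subject -issuer
        pem = ssl.get_server_certificate((HOST, 443))
        with open("served.pem", "w") as fh:
            fh.write(pem)
        print("Certificate written to served.pem")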

    Read the article

  • PPTP VPN on Server 2008 Enterprise

    - by Mike K
    I asked this question on Server Fault and was told it was not allowed there, so I'm moving it here. I am running Windows Server 2008 Enterprise on my HOME network inside VMware Workstation, in order to set up a PPTP VPN connection at home. I have correctly set up everything I needed to make it work, including opening the ports: 1723 and GRE (protocol 47). I am able to connect just fine, but when I connect I don't have internet unless I uncheck "use remote gateway". The thing is, I want to use the remote gateway to route all my traffic through that connection. Can someone tell me why this isn't working and how to get it to work? When I have the remote gateway checked and I do an ipconfig, I don't get a gateway for the VPN connection; it's 0.0.0.0, when I'd assume that, if connected properly, it should be 192.168.1.254 (my AT&T home router). Also, if I can't get the remote gateway to work and I have to uncheck that box to get internet, does this mean my VPN session is no longer encrypted? I am fully aware that PPTP is the weakest VPN encryption out there, but having that extra layer of security when I'm on an unsecured Wi-Fi connection makes me feel a bit better. Thank you for all your help in advance. Someone told me I need to have a gateway or router configured on the server. If that's the case, how do I go about telling the remote co

    Read the article

  • In Windows 7 is there a way to login from any user account and see the same workspace and be able to use the running programs of another user?

    - by WickedMongoose
    Our group has a number of test stands with PCs that are currently accessed with a single group login. It has come down from on high that, for security reasons, this is not the way to do things, and we all agree. However, multiple team members from around the world log into these test stands and need to be able to access programs that would have been run from different user profiles if we no longer had a single common login. Is there a way to have a common workspace, such that when different users log in they can see and interact with all running applications as if they were using a common login? The applications we run link to and monopolize hardware resources connected to the PC, and it is time-consuming to restart them and reload settings every time a new user logs in. Even if a program did not monopolize the hardware, many of these programs are resource-intensive and require a large portion of each machine's RAM, so starting an application again when it is already running under multiple user accounts would quickly consume all system resources. Simple example: I open a Chrome browser while logged into our PC. I then log out, and another team member remotes in and should be able to see my open browser and interact with it as if he were the one who opened it. Any alternative process flows or solutions from someone who has gone through a similar transition would be appreciated. This is not a request for how to give all users the ability to run a program; it is a request for how to let all users interact with running applications that were started by other users, as if the new user had started and controlled the application.

    Read the article

  • imagecreatefromjpeg() stops working after server upgrade

    - by John Conde
    We have a server located at a local company's place of business running Solaris, Apache, and PHP. They recently applied updates (security patches, etc.) to Solaris, Apache, and PHP. Unfortunately this has broken the image manipulation portion of our software. imagecreatefromjpeg() now generates the following error:

        Warning: imagecreatefromjpeg() [function.imagecreatefromjpeg]: '/path/to/file/filename.jpg'
        is not a valid JPEG file in /path/to/file/Image.class.php on line XX

    No PHP code was changed during the server upgrade, and it was fully functional before the software upgrades. I checked the files being passed to imagecreatefromjpeg() and they are indeed valid (they open successfully both in image editing software and in my browser). I checked the permissions of the directory from which the files are being opened, and they do have read permission. The GD library is enabled. I'm not sure what else I can check. Based on the scenario above, I am guessing something changed in the software, but I don't know what it could be. PHP was version 5.2.5 and is now 5.2.13. I appreciate any guidance as to what could be the cause of this issue.
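
    (One server-side check worth doing, independent of PHP and GD, is to confirm that the exact bytes on disk really are JPEG and not, say, a renamed PNG or GIF that the rebuilt GD/libjpeg now rejects. A minimal sketch; the path is a placeholder for the file PHP complains about.)

        import binascii

        def sniff(path):
            """Report the leading bytes of an image file so its real format is obvious."""
            with open(path, "rb") as fh:
                head = fh.read(12)
            kind = "unknown"
            if head.startswith(b"\xff\xd8\xff"):
                kind = "JPEG"
            elif head.startswith(b"\x89PNG\r\n\x1a\n"):
                kind = "PNG"
            elif head[:6] in (b"GIF87a", b"GIF89a"):
                kind = "GIF"
            print(f"{path}: {kind}  (first bytes: {binascii.hexlify(head).decode()})")

        sniff("/path/to/file/filename.jpg")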

    Read the article

  • Debian 7 and PHP 5.4.4 error reporting

    - by milovan
    I use the default php.ini, and then in my PHP script (local.settings.php in Drupal) I simply set:

        ini_set('error_reporting', 'E_ALL & ~E_NOTICE & ~E_STRICT');

    According to the documentation this means "show all messages minus notice and strict warnings". But in my case it still shows strict warnings! I have no idea why, because I clearly stated "~E_STRICT". If I comment it out, then I see strict warnings. So it seems the default from php.ini, "E_ALL & ~E_DEPRECATED & ~E_STRICT", didn't do its job either, as it also has "~E_STRICT", yet I still see strict warnings. On Debian 6 there was the Suhosin patch, which controlled the use of ini_set() in PHP scripts, especially when you try to request more memory than the defined cap. Now on Debian 7 there is no Suhosin, nor any other security element that might control ini_set(). So what might cause the ini_set() not to take effect? Is there some new variable / setup / other that needs to be checked?

    Read the article

  • Access denied when trying to open files/folders after reinstall [closed]

    - by user711532
    Possible Duplicate: Access Denied when saving a file in Windows 7. I installed Windows 7 fresh on a new machine. Now when I unarchive (with WinRAR or 7z, etc.) to Program Files (x86), for example, I get access denied. Even if I copy a file to a folder I installed an app to, it is still access denied. I checked the security settings, and it looks like full control is given to the creator. This is weird, as I never ran across this before (same version of Windows 7; it's just a fresh install after some new hardware). It is the same effect as when editing the hosts file: if you do not use "run as admin", you will not be able to save it, and you will have to save it somewhere else. This "file copy" issue I ran into is the same. I could change all these permissions; however, this is something I never had to do before. I am the admin; why did the install not give me full control? How can this be globally fixed? I cannot change the permissions (they are greyed out), so that is weird as well. If I were a standard user, it would make sense; however, again, I am the admin.

    Read the article

  • BKF file corruption

    - by Naitik Semwaal
    I don't want to ask anything here, as I have nothing to ask. Instead, would you guys mind if I share some useful info? If not, let me proceed. You must have heard about backup, the process in which we create copies of our crucial data in a file called a BKF (backup) file. Having a valid BKF file provides security for our data against unwanted data loss or corruption. Whenever such a critical situation takes place, we can restore our BKF file and get our data back (but only if it was backed up earlier). Have you ever wondered why a BKF file gets corrupted? What could be the reasons that make a BKF file corrupted or inaccessible? One day while googling, I found a blog post named "Reasons of BKF file corruption". I read it; it was very informative. In this blog I came to know the reasons for corruption in BKF files. I shared the blog here so that users can read it and clear their doubts about BKF file corruption. I hope this is helpful.

    Read the article

  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that gives users in the Backup Operators group start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that it is very easy: create a GPO, give the user start/stop permissions on the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila, done. Not so, but I have to be doing something wrong. My install is pretty much the default: the domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU. In the GPO I give the Backup user permission to start/stop the two specific services on the server. I forced an update with gpupdate. I used Group Policy Results to verify that my GPO is the winning GPO granting the user permission to start/stop the two services. However, the user is still unable to start/stop the services. I attempted different loopback settings on the GPO to no avail. I'm sort of at a loss here.

    Read the article

  • Reporting SQL Vulnerability [migrated]

    - by Ciaran87Bel
    This is my first post here, so I'll hopefully keep it simple. I have just finished building a CMS targeted at a certain industry and built a test site to see how everything works. I wrote a program to check for SQL injection vulnerabilities, and the program followed a blog link to an external website. The program discovered that the external site had a massive vulnerability that left it open to practically anyone, who could then access every bit of data on their MySQL server, run queries, etc. The thing is, this external site is the brand leader in their industry and does millions upon millions of sales per annum. I have tried contacting them to let them know, and even went as far as contacting the company that built their platform, but I was pretty much brushed off and haven't heard back from them. Their database would contain the details of hundreds of thousands of customers and all their data. I could easily make myself a site admin in a few seconds, but they won't listen to me, even though I have offered to share the vulnerability with them and help in any way I can. Is there anything else I can do? It is one of the biggest security risks I have ever personally come across. Are there any other steps I should take to report this? Thanks

    Read the article
