Search Results

Search found 35340 results on 1414 pages for 'policy based management'.


  • SCCM SP2 - OOB Management Certificates Problems

    - by Achinoam
    Hi experts, I have a vPro client computer with AMT 4.0. It was imported successfully via the Import OOB Computers wizard, and after sending a "Hello packet" it became provisioned (the SCCM GUI displays AMT Status: Provisioned). But when I try to perform power operations on this machine, they always fail with the following lines in the log:

        AMT Operation Worker: Wakes up to process instruction files  7/29/2009 10:59:29 AM  2176 (0x0880)
        AMT Operation Worker: Wait 20 seconds...  7/29/2009 10:59:29 AM  2176 (0x0880)
        Auto-worker Thread Pool: Work thread 3884 started  7/29/2009 10:59:29 AM  3884 (0x0F2C)
        session params : https://amt4.domaindemo.com:16993 , 11001  7/29/2009 10:59:29 AM  3884 (0x0F2C)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Description: A security error occurred  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Error: Failed to Invoke CIM_BootConfigSetting::ChangeBootOrder_INPUT action.  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        AMT Operation Worker: AMT machine amt4.domaindemo.com can't be waken up. Error code: 0x80072F8F  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Auto-worker Thread Pool: Warning, Failed to run task this time. Will retry(1) it  7/29/2009 10:59:31 AM  3884 (0x0F2C)

    After investigation, I've seen that the problem already occurs at the 2nd stage of provisioning:

        Start 2nd stage provision on AMT device amt4.domaindemo.com.  8/2/2009 4:55:12 PM  2944 (0x0B80)
        session params : https://amt4.domaindemo.com:16993 , 11001  8/2/2009 4:55:12 PM  2944 (0x0B80)
        Delete existing ACLs...  8/2/2009 4:55:12 PM  2944 (0x0B80)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Description: A security error occurred  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: Cannot Enumerate User Acl Entries.  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: CSMSAMTProvTask::StartProvision Fail to call AMTWSManUtilities::DeleteACLs  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: Can not finish WSMAN call with target device. 1. Check if there is a winhttp proxy to block connection. 2. Service point is trying to establish connection with wireless IP address of AMT firmware but wireless management has NOT enabled yet. AMT firmware doesn't support provision through wireless connection. 3. For greater than 3.x AMT, there is a known issue in AMT firmware that WSMAN will fail with FQDN longer than 44 bytes. (MachineId = 17)  8/2/2009 4:55:14 PM  2944 (0x0B80)
        STATMSG: ID=7208 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_AMT_OPERATION_MANAGER" SYS=JE-DEV-MS0 SITE=JR1 PID=1756 TID=2944 GMTDATE=Sun Aug 02 14:55:14.281 2009 ISTR0="amt4.domaindemo.com" ISTR1="amt4.domaindemo.com" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0  8/2/2009 4:55:14 PM  2944 (0x0B80)

    This error is consistent across all the other 2nd-stage provisioning tasks (Add ACLs, Enable Web UI, etc.). I've opened the certification authority, and I see that the certificates were issued to the SCCM site server instead of the AMT client! What could be the reason for this failure? What is the problematic definition for the certificate? Thank you in advance!!!

    Read the article

  • TCP server memory management: #Connections vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only thing that happens is that each connection consumes server memory. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say we have 2 GB of server memory. Now there are two extreme cases. Case 1: if we allocate a 64 KB buffer for each connection (only to receive incoming requests), then 32768 connections can consume all 2 GB of memory, leaving nothing to queue or process incoming requests from those connections. Case 2: on the other hand, if a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time, those buffers pile up and eventually occupy most of the server's memory, leaving no memory for new connections thereafter. This is the real dilemma in server design that has been bugging me badly for many days. If I can decide on the maximum request-buffer size per connection and the maximum number of requests to allow in the queue per connection, then, based on available server memory, I can automatically derive a limit on the maximum number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines or empirical data that someone can share with me, please?
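
    One way to frame the trade-off is to charge every connection a fixed worst-case budget (its receive buffer plus a bounded request queue) and derive the connection cap from that. A back-of-the-envelope sketch in shell arithmetic; the four-buffers-per-connection queue cap is an assumption, not something from the question:

        mem_total=$((2 * 1024 * 1024 * 1024))   # the 2 GB budget from the example
        recv_buf=$((64 * 1024))                 # per-connection receive buffer (case 1)
        queue_cap=$((4 * recv_buf))             # cap the per-connection request queue at 4 buffers
        per_conn=$((recv_buf + queue_cap))      # worst-case memory charged to one connection
        echo "max concurrent connections: $((mem_total / per_conn))"   # => 6553

    Once both per-connection numbers are capped, neither extreme case can starve the other: the cap turns case 2's unbounded pile-up into backpressure on that one connection.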

    Read the article

  • Best console-based text editor, not only for programmers [closed]

    - by robo
    I need a console-based text editor for writing both source code and human-readable text such as emails. I need it to be user friendly. By that I mean: You can use it the same way as Notepad or gedit. You can use the mouse in it. If you need your mother or girlfriend or somebody to edit your text, they will know what to do; they won't realize it is a console application and will have the feeling it is something like Notepad. Copy, paste, and undo work as usual with the usual key combinations (Ctrl-C, Ctrl-V, Ctrl-Z). Shift plus arrows works as usual: it selects text. And when I return to the computer, I want to use the same editor for programming. There I expect: syntax highlighting, auto-indenting, replacing spaces with tabs, keyboard shortcuts for compiling, the possibility to configure it to use a debugger, autocompletion for C#, Java, C++ and other languages, and the other things I expect from IDEs. I spent a few years working on and configuring vim, but it never quite fulfilled all of my expectations (though it almost did). I think I could get vim configured perfectly if I had a few more weeks to spend on it; unfortunately, I cannot afford to keep configuring vim forever. Is there an alternative - ideally an editor I set up once and it works from then on? What do you use? I often hear that people use Emacs. Is it worth learning?
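
    Not an answer to the editor hunt itself, but since years of vim configuration are already sunk in: vim ships a Windows-behaviour profile that covers most of the notepad-like wish list above. A minimal ~/.vimrc sketch:

        " a sketch: notepad-style keys plus the programming basics
        set mouse=a                      " mouse works in the terminal
        behave mswin                     " Shift-arrows select, Windows-style selection
        source $VIMRUNTIME/mswin.vim     " Ctrl-C / Ctrl-V / Ctrl-Z / Ctrl-A bindings
        syntax on                        " syntax highlighting
        set autoindent                   " basic auto-indenting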

    Read the article

  • Apache address-based access control

    - by stijn
    I have an Apache instance serving different locations, e.g.

        https://host.com/jira
        https://host.com/svn
        https://host.com/websvn
        https://host.com/phpmyadmin

    Each of these has access control rules based on IP address/hostname. Some of them use the same configuration, though, so I have to repeat the same rules each time:

        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com

    Is there a way to make these reusable so that I don't have to change each instance every time something changes? I.e., can I write this once, then use it for a couple of locations? Using SetEnvIf maybe? It would be nice if I could do something like this pseudo-config:

        <myaccessrule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </myaccessrule>

        <Proxy /jira*>
            AccessRule = myaccessrule
        </Proxy>
        <Location /svn>
            AccessRule = myaccessrule
        </Location>
        <Directory /websvn>
            AccessRule = myaccessrule
        </Directory>
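
    One approach that works on Apache 2.2 without extra modules is to keep the shared rules in a plain file and pull them in with Include, which is allowed inside <Location> and <Directory> blocks. A sketch (the fragment path is an assumption); mod_macro would give something even closer to the pseudo-config above if installing a module is an option:

        # /etc/apache2/acl-common.conf -- the shared fragment (hypothetical path)
        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com

        # main config -- reuse the fragment in each context
        <Location /svn>
            Include /etc/apache2/acl-common.conf
        </Location>
        <Directory /websvn>
            Include /etc/apache2/acl-common.conf
        </Directory>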

    Read the article

  • How to set up NTFS ACLs with Access-Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell NetWare to a Windows 2008 R2 infrastructure (AD, file server, print server, etc.). My question is about ACLs. NetWare and Windows are totally different, so I want to be sure my thinking is right before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA   <= shared as DATA with Access-Based Enumeration
            |
            +-- Folder 1
            +-- Team 1's Folder
            +-- Team 2's Folder
            ...

    In that case, by default, rights are inherited from F: down to the deepest folders. What we want: the Administrators group has full control top-down, and from DATA on, ABE lists only the folders a user has access to (e.g., if I'm in group Team 2, I see only Team 2's Folder). From what I understand, at DATA I should remove the inherited NTFS ACEs (e.g., the Users group), making sure to keep the Administrators group and the SYSTEM account, and after that grant full control (or whatever right is needed) on each folder to the groups or users that need access. Am I wrong? Is there anything I should take care of? Any help with my understanding will be very appreciated. Regards.
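
    For reference, the same plan expressed as icacls commands (a sketch, run from an elevated prompt; the group names are assumptions, and testing on a copy first is wise):

        :: 1. Break inheritance at DATA, keeping copies of the current ACEs:
        icacls "F:\DATA" /inheritance:d
        :: 2. Remove the broad ACE that was copied down (Administrators and SYSTEM stay):
        icacls "F:\DATA" /remove "BUILTIN\Users"
        :: 3. Grant each team modify rights on its own folder, inherited downward:
        icacls "F:\DATA\Team 1's Folder" /grant "DOMAIN\Team1:(OI)(CI)M"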

    Read the article

  • Need help finding a program to split PDFs based on text in specific areas

    - by Sean
    I really need help with this. I work for a university in the admissions office, and we get large PDFs of every document that an applicant uploads to us. What I need to do is split these PDFs into separate documents. Each of the separate documents has a heading at the top stating what type of document it is (Statement of Purpose, Transcript, etc.). I found a program a few weeks ago that at first seemed to work great (A-PDF Splitter). It would look for every type of document in a combined PDF (e.g., Statement of Purpose), and if it saw that heading it would create a new PDF named "ORIGINALFILENAME-Statement of Purpose". I soon discovered, though, that the program breaks for no apparent reason on certain PDFs, and I have to take that PDF out of the queue and start the splitting over again (and we get 250 a day). I contacted their support and they basically told me I was out of luck. So please, if you can find me a program that splits PDFs into smaller PDFs based on whether it finds particular text in a region, I would be forever in your debt. If I don't find one soon, I'm going to be spending entire days splitting PDFs by hand, and my boss isn't going to be happy.
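
    In case a scriptable route is acceptable: the split can be assembled from standard command-line tools. A rough sketch, not a finished product - it assumes poppler-utils (pdfinfo, pdftotext) and qpdf are installed, and the heading list and file names are examples:

        #!/bin/bash
        # Split combined.pdf at every page whose text contains a known heading.
        in="combined.pdf"
        headings=("Statement of Purpose" "Transcript")
        pages=$(pdfinfo "$in" | awk '/^Pages:/ {print $2}')
        start=1
        label="part1"
        for ((p = 1; p <= pages; p++)); do
            text=$(pdftotext -f "$p" -l "$p" "$in" -)        # text of page p only
            for h in "${headings[@]}"; do
                if grep -q "$h" <<<"$text"; then
                    if ((p > start)); then                   # close the previous chunk
                        qpdf "$in" --pages "$in" "$start-$((p - 1))" -- "${label// /_}.pdf"
                    fi
                    start=$p
                    label=$h
                    break
                fi
            done
        done
        # final chunk (identical headings overwrite; add a counter for real use)
        qpdf "$in" --pages "$in" "$start-$pages" -- "${label// /_}.pdf"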

    Read the article

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6i integrated card. We're using Nagios at present and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of /dev/megaraid_sas_ioctl_node. This device node doesn't exist by default; you have to create it by hand using something like:

        mknod /dev/megaraid_sas_ioctl_node c 253 0

    Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual, I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local/rules that contained:

        DRIVER=="megasas" NAME="megaraid_sas_ioctl_node" MODE="0600"

    But this doesn't work - no device node after a reboot. Dmesg output indicates the megasas driver is loaded and functional:

        megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008
        megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0
        megasas: FW now in Ready state

    Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work. I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be a) cleaner and b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local... So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver? Cheers, Steve
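
    Two notes that may explain the failure: udev rules fire on device events, and DRIVER== matches the driver of a device that actually exists, so merely loading the megasas module does nothing unless the driver registers a device; and the rules file must sit directly under rules.d with a .rules suffix (10-local.rules), not at 10-local/rules. If the driver does register a character device (check /proc/devices for its name), a rule along these lines may work - the kernel name on the left is an assumption, so verify it on your system:

        # /etc/udev/rules.d/10-local.rules -- a sketch; take the KERNEL value
        # from /proc/devices or 'udevadm info' rather than trusting this line
        KERNEL=="megaraid_sas_ioctl", NAME="megaraid_sas_ioctl_node", MODE="0600"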

    Read the article

  • Denying access to a website via .htaccess based on an HTTP header

    - by neekster
    I've been trying for ages to get this to work and I can't put my finger on the problem. What I'm trying to do is block access to a site from a number of countries, based on the CF-IPCountry header added by CloudFlare. I figured .htaccess was a suitable way to do this. We are running LiteSpeed 4.2.4 on top of DirectAdmin as a control panel. The problem we're having is that the htaccess rule doesn't seem to do anything. Here's the rule we tried:

        SetEnvIf CF-IPCountry AU UnwantedCountry=1
        Order allow,deny
        Deny from env=UnwantedCountry
        Allow from all

    That makes no difference at all; connections are still accepted. Just to check that the rule was at least being processed, I changed Allow from all to Deny from all, and connections were refused. So it appears to be a problem with the variable. Here are the relevant headers that come in with the request:

        Connection: Keep-Alive
        Accept-Encoding: gzip
        CF-Connecting-IP: xx.xx.xx.xx
        CF-IPCountry: AU
        X-Forwarded-For: xx.xx.xx.xx.xx
        CF-RAY: c9062956e2d04b6
        X-Forwarded-Proto: http
        CF-Visitor: {"scheme":"http"}
        Zone-Name: xx.com.au

    Hopefully someone can help me out; this has been driving me nuts for too long. Thanks
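
    An alternative worth trying (a sketch): express the block with mod_rewrite, which LiteSpeed implements quite completely. %{HTTP:...} reads an arbitrary request header, so it sidesteps the SetEnvIf/env= handling entirely; the country list shown is an example:

        RewriteEngine On
        RewriteCond %{HTTP:CF-IPCountry} ^(AU|NZ)$
        RewriteRule ^ - [F]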

    Read the article

  • Formula-based Excel page headers

    - by Jake Krohn
    I'm using the "Rows to repeat at top" function in Excel's "Page Setup" dialog to ensure that a multi-row header block appears on every printed page of my worksheet. However, I'd like to be able to change certain bits of the header based on the content of the current page. I would simply like to display the value of one cell in the first row that is printed on the page. If this is my header:

        Section: xx

    And the data looks like this (columns are Section and Name):

        1  Foo
        1  Bar
        2  Baz

    I want the "xx" in the header to be "1". If, further down on the next page, the value in the Section column is "3", I want that printed in the header of the next page. I originally thought that using the OFFSET function might help, e.g. ="Section: "&OFFSET(A2, 1, 0), but it only shows the offset from the original placement of the header, thus only working on page 1. The end document is a PDF, so right now I'm able to go back in with the "TouchUp Text Tool" in Acrobat and add the numbers page by page. But it gets to be a tedious process with 70+ page reports. Anyone have any better ideas that don't require me mucking up the original Excel document with inserted headers every N lines? This is Excel 2008 for Mac, if it makes a difference.

    Read the article

  • In a shell script, check the version of an installed package and make a decision based on the output

    - by DJDarkViper
    I'm looking to write a cross-distro / cross-version shell script that makes sure a specific version of PHP is installed. For example: Ubuntu 12.04 has 5.3, Ubuntu 13.10 has 5.5, Debian 7 has 5.4. I need this script, when run on a distro that has an older version of PHP, to update the repo to point to a package for 5.4, and if the distro has too new a version, to downgrade to 5.4 appropriately. I'm still not entirely sure of the shell's technical limits, and I'll be frank that I'm not totally used to the existing tools. The best I can think of at the moment is:

        php -v | grep "PHP 5"

    but that returns a bunch of granular, changeable characters (PHP 5.4.4-14+deb7u5 (cli) (built: Oct 3 2013 09:24:58)), and I'm not sure what to pipe to after this to extract the characters I'm interested in. I'm not sure if I'm being totally clear; basically, in an automated shell script for Linux distros, how do I extract the PHP version (preferably just the version number) and make a decision based on that output? EDIT: This line ended up doing pretty well:

        php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3

    A bit long in the tooth, but it gives me "5.3", "5.4", and "5.5", which is exactly what I need to work with.
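
    A tighter variant of the same idea is to ask PHP for its version directly instead of parsing the banner; PHP_MAJOR_VERSION and PHP_MINOR_VERSION are built-in constants:

        ver=$(php -r 'echo PHP_MAJOR_VERSION . "." . PHP_MINOR_VERSION;')
        case "$ver" in
            5.4) echo "PHP 5.4 already present, nothing to do" ;;
            5.3) echo "too old: add the 5.4 repo and upgrade" ;;
            *)   echo "too new ($ver): pin/downgrade to 5.4" ;;
        esac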

    Read the article

  • Linux port-based routing using iptables/ip route

    - by user42055
    I have the following setup:

        192.168.0.4        192.168.0.6        192.168.0.1
        +-----------+      +---------+        +---------+
        |WORKSTATION|------|  LINUX  |--------| GATEWAY |
        +-----------+      +---------+        +---------+
                      192.168.150.10 |
                      192.168.150.9  |
                                +---------+
                                |   VPN   |
                                +---------+
                               192.168.150.1

    WORKSTATION has a default route of 192.168.0.6. LINUX has a default route of 192.168.0.1. I am trying to use the gateway as the default route, but route port 80 traffic via the VPN. Based on what I read at http://www.linuxhorizon.ro/iproute2.html I have tried this:

        echo "1 VPN" >> /etc/iproute2/rt_tables
        sysctl net.ipv4.conf.eth0.rp_filter=0
        sysctl net.ipv4.conf.tun0.rp_filter=0
        sysctl net.ipv4.conf.all.rp_filter=0
        iptables -A PREROUTING -t mangle -i eth0 -p tcp --dport 80 -j MARK --set-mark 0x1
        ip route add default via 192.168.150.9 dev tun0 table VPN
        ip rule add from all fwmark 0x1 table VPN

    When I run "tcpdump -i eth0 port 80" on LINUX and open a web page on WORKSTATION, I don't see the traffic go through LINUX at all. When I run a ping from WORKSTATION, I get this back for some packets:

        92 bytes from 192.168.0.6: Redirect Host(New addr: 192.168.0.1)
        Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
         4  5  00 0054 de91   0 0000  3f  01 4ed3 192.168.0.4  139.134.2.18

    Is this why my routing is not working? Do I need to put GATEWAY and LINUX on different subnets to prevent WORKSTATION being redirected to GATEWAY? Do I need to use NAT at all, or can I do this with routing alone (which is what I want)?
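
    Two things stand out, addressed in the hedged sketch below (addresses and interfaces as in the question): the ICMP redirect means LINUX is telling WORKSTATION to talk to GATEWAY directly, so port-80 packets may stop passing through LINUX at all; and even when the mark/rule/route trio works, packets leaving tun0 still carry WORKSTATION's source address unless they are NATed, so replies never come back through the tunnel.

        # on LINUX: stop redirecting WORKSTATION to the gateway
        sysctl -w net.ipv4.conf.eth0.send_redirects=0
        # keep the existing mark, rule and table, then NAT marked traffic out the VPN
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE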

    Read the article

  • Delete cell content in LibreOffice/OpenOffice based on the cell value

    - by take2
    I have a huge CSV file (tens of thousands of rows) that I need to filter based on different criteria. After trying to find a proper CSV editor, I decided to use LibreOffice Calc. CSVed is great, but it supports neither UTF-8 nor macros for advanced filtering. So, there are 4 columns, 3 of which contain numbers (with decimals) and 1 of which contains text. I'm trying to find a way to delete rows with macro code. I can achieve the desired behavior with filters too, but it's annoying to type all of the filtering values over and over again, and there doesn't seem to be a way to export the filter and use it repeatedly. The rows that should be deleted are the ones that don't contain certain words in the text column (column A). There are a few thousand different words used in that column, and I want to keep only the rows that contain one of about 30 words there. Additionally, the numbers in the other columns should be bigger than 3.8 (column B) and 4.5 (column C), and smaller than 20 (column C). The row-deletion type is "Shift up". Hopefully I have explained it well. Thanks a lot in advance for your help!
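
    A hedged LibreOffice Basic sketch (Tools > Macros), reading the criteria as: keep a row only if column A contains one of the words, B > 3.8, and 4.5 < C < 20; delete everything else. The word list is a placeholder for the ~30 real ones:

        Sub DeleteFilteredRows
            Dim oSheet, oCur, lastRow, i, txt, keep, w
            Dim words : words = Array("word1", "word2")   ' the ~30 keep-words go here
            oSheet = ThisComponent.Sheets.getByIndex(0)
            oCur = oSheet.createCursor()
            oCur.gotoEndOfUsedArea(False)
            lastRow = oCur.RangeAddress.EndRow
            For i = lastRow To 0 Step -1                  ' bottom-up keeps indexes valid
                txt = oSheet.getCellByPosition(0, i).getString()
                keep = False
                For Each w In words
                    If InStr(txt, w) > 0 Then keep = True
                Next w
                If oSheet.getCellByPosition(1, i).getValue() <= 3.8 Then keep = False
                If oSheet.getCellByPosition(2, i).getValue() <= 4.5 Then keep = False
                If oSheet.getCellByPosition(2, i).getValue() >= 20 Then keep = False
                If Not keep Then oSheet.Rows.removeByIndex(i, 1)
            Next i
        End Sub

    Deleting row by row is slow on tens of thousands of rows; wrapping the loop in ThisComponent.lockControllers() / unlockControllers() helps considerably.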

    Read the article

  • Change source address based on destination IP

    - by hgj
    We have several "router" machines that gather a lot of external IP addresses on the same host and redirect, NAT, or proxy the traffic to the internal network. They also act as routers for the machines on the internal network. This works fine; however, I am unable to build the routing table so that the source address changes based on the destination a machine from the internal network wants to access. Let's say I have a router that has public addresses P1 (5.5.5.1/24) and P2 (5.5.5.2/24). All traffic goes through P1, but if necessary the host is reachable on P2 too. That looks like this and works fine:

        > ip addr
        ...
        1: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether aa:bb:cc:dd:ee:11 brd ff:ff:ff:ff:ff:ff
            inet 5.5.5.1/24 brd 5.5.5.255 scope global eth1
            inet 5.5.5.2/24 brd 5.5.5.255 scope global secondary eth1:p2
        ...

    Now I want to use P2 as the source address when I access the Google DNS service, for example (8.8.8.8). So I add a row to the routing table:

        > ip route add 8.8.8.8 via 5.5.5.254 dev eth1 src 5.5.5.2
        > ip route
        ...
        default via 5.5.5.254 dev eth1
        5.5.5.0/24 dev eth1 proto kernel scope link src 5.5.5.1
        8.8.8.8 via 5.5.5.254 dev eth1 src 5.5.5.2
        ...

    But this does not work. If I ping 8.8.8.8, the host still uses P1 as the source address and does not use P2 at all for outgoing connections. Am I doing it right? I guess not...
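
    For what it's worth, the src hint on a route applies only to locally generated traffic from sockets that haven't bound an address, and older kernels cache routing decisions per destination. A few standard checks (iproute2/ping):

        ip route get 8.8.8.8      # shows the route and source address the kernel picks
        ip route flush cache      # drop decisions cached before the route was added
        ping -I 5.5.5.2 8.8.8.8   # bind the source explicitly to confirm P2 works at all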

    Read the article

  • Exchange ActiveSync policy - can I make it not required for a user?

    - by TheCleaner
    Exchange 2010 SP2. I have a "C"-level exec who wants to get his email on his Android tablet. Easy enough. However, he doesn't want any ActiveSync policy applied to his device for remote wipe, etc., not even the default policy, and he doesn't want to use OWA. I thought I knew Exchange pretty well, but I can't find a PowerShell command or anything else that will allow a device to connect without enforcing at least some kind of policy. Is he out of luck with ActiveSync? I can set him up with POP3/IMAP, but would rather not.
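
    For reference, a mailbox can be detached from any ActiveSync mailbox policy in the Exchange Management Shell; whether the device then syncs without being prompted to enforce settings depends on the client. A hedged sketch (the identity is a placeholder):

        Set-CASMailbox -Identity "exec.name" -ActiveSyncMailboxPolicy $null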

    Read the article

  • MySQL-based authentication with crypt()ed passwords fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

        DBDriver mysql
        DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"

        <Directory />
            AllowOverride None
            Order allow,deny
            allow from all
            AuthName 'CalDav'
            AuthType Basic
            AuthBasicProvider dbd
            require valid-user
            AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
        </Directory>

    I've tested the query; it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
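
    One detail worth checking: mod_authn_dbd hands the first column returned by the query to APR's password check, which understands traditional crypt() hashes and Apache's own $apr1$ MD5 format, but not, for example, MySQL's PASSWORD() hashes - those fail silently. A quick test (a sketch) is to store a known-good hash for one account:

        htpasswd -nbm testuser testpass   # prints testuser:$apr1$... to stdout
        # then: UPDATE myAuthTable SET crypt = '<that hash>' WHERE id = 'testuser';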

    Read the article

  • Hard-disk-based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e., a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back anything up to tape, it has to be funneled through the file server. With the increase we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that a second backup device might be more appropriate. I would like your input on a couple of ideas. 1) Using HDDs instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media doesn't need years and years of shelf life, and I'm wondering if something HDD-based would be better. 2) Something accessible over the network. Instead of having the device directly attached to one specific machine, make it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup; I was thinking of something like that, but with no RAID involved - all disks stand-alone so they can be used, installed, or removed on an individual basis. Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • VirtualName-based local development host behind corporate proxy (MAMP)

    - by geerlingguy
    I am behind a corporate proxy server/firewall, and this firewall seems not to be too happy with my idea of local development. On my home computer (Mac/Leopard), I have MAMP running, with a rule in /etc/hosts that directs dev.example.com to 127.0.0.1, and I have a virtualhost set up in the httpd.conf file, which works great for me. However, at work I set up the exact same configuration but am not able to access dev.example.com, likely due to some address/DNS translation going on via the proxy server. Here are the relevant details from Terminal:

        $ ping dev.example.com
        PING dev.example.com (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.025 ms

        $ host dev.example.com
        Host dev.example.com not found: 3(NXDOMAIN)

    I've tried adding dev.example.com to the list of bypass addresses in System Preferences (the 'Bypass proxy settings for these Hosts & Domains' list), but that had no effect. Is there any way I can develop locally using name-based hosts at work? I can access localhost, but can't get to dev.example.com (or any other custom virtualhosts) here at work, which complicates other matters related to the sites I'm working on...
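
    One point of interpretation: the host command queries DNS servers directly and never consults /etc/hosts, so the NXDOMAIN above is expected and not itself the failure. Two checks that exercise the Mac's real resolver and proxy paths (a sketch; curl's --noproxy needs a reasonably recent curl):

        dscacheutil -q host -a name dev.example.com   # uses the system resolver, honours /etc/hosts
        curl --noproxy '*' http://dev.example.com/    # talk to 127.0.0.1 directly, skipping the proxy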

    Read the article

  • Issues with returned mail sent to web-based email domains

    - by Beeder
    My company is having issues with mail we send to external domains being returned. A few weeks ago we replaced a firewall and changed ISP providers, and we subsequently began having issues RECEIVING emails from external sources because we hadn't updated our new IPs in the DNS records. After making the necessary configuration changes and setting up SMTP forwarding over port 25 to our mail server, everything was working fine up until a few days ago, when our outbound messages started being returned to us. We aren't having any trouble communicating internally (with recipients on our domain), but it seems we're having trouble with outbound messages to web-based email recipients (@hotmail, @live, @yahoo, @gmail, etc.). Currently we are running Server 2003 SP2 and Exchange 2003. I'm very unfamiliar with configuring Exchange and could really use some help in narrowing down the possibilities. I did some research and am becoming suspicious of Sender ID being the culprit, due to our recent IP address change and the likelihood that Sender ID is identifying us as a fake domain. Am I going in entirely the wrong direction? Any input or guidance would be infinitely appreciated. This is the message that is returned when an outbound message fails; this particular one was sent to my @live.com account for testing purposes:

        Your message did not reach some or all of the intended recipients.
        The following recipient(s) could not be reached:
        [email protected] on 5/17/2012 3:02 PM
        There was a SMTP communication problem with the recipient's email server.
        Please contact your system administrator.
        Unfortunately, messages from xx.xx.xx.x weren't sent. Please contact your
        Internet service provider since part of their network is on our block list.

    I tried a reverse DNS lookup and found that we are set up as forward-confirmed reverse DNS. So do I just need to contact my ISP and have them correct their DNS records, or is this something I can solve on our end?
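
    If Sender ID is indeed the issue, the published SPF record needs to list the new outbound IP, since Hotmail/Live evaluates SPF/Sender ID against it. A sketch of a minimal TXT record (the address is a placeholder, not your real IP):

        example.com.  IN TXT  "v=spf1 ip4:203.0.113.25 mx -all"

    The "block list" wording in the bounce also makes it worth checking the new IP against public blacklists and asking the ISP about matching reverse DNS.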

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all of the client's network subdomains (*.example.com) need to go through the VPN connection. I tried a couple of options. Changing the order of services and setting the VPN on top: this works the same as "Send all traffic over VPN connection". Using the "VPN on Demand" option from the network advanced options: this feature is quite rubbish, to be honest; it seems to work only in Safari (?!), and it doesn't route the connection, it basically just triggers the OS to connect to the selected VPN. The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but that doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file; but considering there could be hundreds of servers (and therefore hostnames and IPs), that's a ridiculous job, and if a new server appears on the network I'd need to edit the hosts file again. Sisyphean labours. However, this works very simply on Windows: if a hostname is not available through the default network interface, it seems to try the VPN connection, and this works brilliantly. So, how can I achieve that on a Mac? I know the client's internal DNS addresses, if that is of any help (e.g., directing certain domains to a different DNS)? PS. Using the latest version, 10.6.6. PS2. I am using the VPN to access the intranet, version control servers (svn://), Samba shares, and SSH access to servers.
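
    On the DNS half of the problem, Mac OS X supports per-domain resolvers via resolver(5): a file named after the domain under /etc/resolver sends just those lookups to the client's internal DNS. Combined with a route for the client's server range over the VPN interface, this gives per-domain behaviour without hosts-file bookkeeping. A hedged sketch (addresses and interface are placeholders):

        # /etc/resolver/example.com
        nameserver 10.0.0.53

        # after connecting, route the client's range via the VPN interface:
        # sudo route add -net 10.0.0.0/16 -interface ppp0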

    Read the article

  • Couldn't find package, but the package is listed in the Packages file

    - by Chris
    (Quoted items are redacted.) I am using a private repository and am currently trying to repackage some 3rd-party packages. I extract a package, make a few modifications (just the control files, to fit company policy - though sometimes file install locations too, not in this case), and repackage (and usually rename). Normally I copy the files into a new blank debhelper project and reconstruct the package. However, with a recent one I was attempting to convert, some libraries and other pieces weren't linking properly (I did copy the postinst, postrm, and preinst files along with all DEBIAN files exactly). The original package worked, but my repackage doesn't, despite providing the same files in the same locations and the same postinst and preinst. So I was attempting to just modify the current package's control files (the original package is not very good, will not list in our repository, and getting a better one from the 3rd party is not an option). I also renamed the package. I did the following:

        dpkg-deb -R "directory"
        (modify DEBIAN/control)
        dpkg-deb -b "directory" "package name I want"

    I did this and put it in our repository. The package shows up in the "Packages" file on the repository, and running apt-get update on the client side shows the package in /var/lib/apt/lists/"server"_"location"_Packages. However, when I do an apt-get install on the package name (as listed in the Packages file - I did a copy-paste), it says it can't find the package. Same with an apt-cache search. The Packages listing is as follows (names redacted):

        Package: "package name"
        Priority: extra
        Section: unknown
        Maintainer: "maintainer"
        Architecture: any
        Version: 1.0-lucid5
        Depends: libc
        Filename: "directory"/"package_filename"
        Size: 2206292
        MD5sum: "md5sum"
        SHA1: "sha key"
        SHA256: "sha256 key"
        Description: "description"

    I am running as sudo (and tried as root as well). I don't understand why apt-get won't see the package. Can you point out any flaws in what I have done, or perhaps offer some help on getting apt-get to properly see the package? Or perhaps an alternative - I am not even sure this is a valid way to repackage something. Thanks.
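
    One field in that listing stands out: binary packages in a Packages file need a concrete architecture (amd64, i386, or all), and "Architecture: any" is a source-package value, so apt on an amd64 client will skip the entry. Two quick checks (a sketch):

        apt-cache policy "package-name"            # does apt see an installable candidate at all?
        dpkg -I "package.deb" | grep Architecture  # should print amd64/i386/all for a binary .deb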

    Read the article

  • dpkg E: Sub-process /usr/bin/dpkg returned an error

    - by user81269
    I decided to shift around the partitions on my hard drive for a fresh install of Kubuntu. I booted my Ubuntu 10.10 live disc, shifted everything around, and attempted to install GRUB, which didn't work, so I burnt an Ubuntu 12.04 disc and installed that instead. I got the computer working and wanted to install some packages, but didn't have an internet connection at the time. So (I know this was stupid) I got some debs from previous versions of Ubuntu, as I needed my music and the other install took a long time to boot. Once I got my internet connection back, everything worked OK for a little while. Then I stumbled upon this problem after removing ten broken packages using Synaptic:

        drhax@Spamotard:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          libgtk2.0-cil
        0 upgraded, 0 newly installed, 1 to remove and 417 not upgraded.
        1 not fully installed or removed.
        After this operation, 2,638 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 103052 files and directories currently installed.)
        Removing libgtk2.0-cil ...
        E: File does not exist: /usr/share/cli-common/packages.d/policy.2.6.gtk-dotnet.installcligac
        dpkg: error processing libgtk2.0-cil (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         libgtk2.0-cil
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Help would be appreciated. This is my first post, but I do know a fair bit about Ubuntu, so feel free to point out any stupid mistakes I have made.
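
    The removal fails because the post-removal script references a file that no longer exists. A common way out (a sketch): recreate the file the script expects, then retry the removal:

        sudo touch /usr/share/cli-common/packages.d/policy.2.6.gtk-dotnet.installcligac
        sudo dpkg --remove libgtk2.0-cil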

    Read the article

  • "io: postinst-must-call-ldconfig" when creating a package

    - by egarcia
    I'm trying to create an Ubuntu .deb package for the (pretty awesome) Io language. I am not the developer of that language, so I'm not familiar with its source code yet. This is my first attempt at creating a .deb file. To create the .deb, I'm following these instructions: http://www.webupd8.org/2010/01/how-to-create-deb-package-ubuntu-debian.html. So far I've been able to create a .deb file (io_2010.06.01-1_amd64.deb) and a changes file (io_201.06.01-1_amd64.changes). I'm using lintian to check the changes file, and it reports an issue I don't know how to resolve:

        $ lintian -Ivi io_2010.06.01-1_amd64.changes
        ... (lots of messages)
        I: io: no-symbols-control-file usr/lib/libiovmall.so
        I: io: no-symbols-control-file usr/lib/libgarbagecollector.so
        I: io: no-symbols-control-file usr/lib/libbasekit.so
        E: io: postinst-must-call-ldconfig usr/lib/libiovmall.so
        N:
        N:    The package installs shared libraries in a directory controlled by the
        N:    dynamic library loader. Therefore, the package must call "ldconfig" in
        N:    its postinst script.
        N:
        N:    Refer to Debian Policy Manual section 8.1.1 (ldconfig) for details.
        N:
        N:    Severity: serious, Certainty: certain
        N:
        N: Removing /tmp/OYuNShEHYz ...

    I've read the Debian manual's section 8.8. I think I understand what the problem is (I need to make sure that ldconfig is invoked somewhere, possibly in a file called "postinst"), but I don't know how to resolve it (i.e., where this postinst file is and how I should change it). The current way of installing Io on Ubuntu is basically running sudo make install and then sudo ldconfig. Maybe the makefile should be modified so ldconfig is called from it? I don't know. Thanks a lot.
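
    The postinst is a maintainer script that sits next to control in the package's DEBIAN directory (mode 0755). The canonical fix is the snippet Debian Policy describes; debhelper generates exactly this via dh_makeshlibs when the maintainer scripts contain the #DEBHELPER# token:

        #!/bin/sh
        set -e
        if [ "$1" = "configure" ]; then
            ldconfig
        fi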

    Read the article

  • Two views of Federation: inside out, and outside in

    - by Darin Pendergraft
    IDM customers that I speak to have spent a lot of time thinking about enterprise SSO - asking your employees to log in to multiple systems, each with distinct hard to guess (translation: hard to remember) passwords that fit the corporate security policy for length and complexity is a strategy that is just begging for a lot of help-desk password reset calls. So forward thinking organizations have implemented SSO for as many systems as possible. With the mix of Enterprise Apps moving to the cloud, it makes sense to continue this SSO strategy by Federating with those cloud apps and services.  Organizations maintain control, since employee access to the externally hosted apps is provided via the enterprise account.  If the employee leaves, their access to the cloud app is terminated when their enterprise account is disabled.  The employees don't have to remember another username and password - so life is good. From the outside in - I am excited about the increasing use of Social Sign-on - or BYOI (Bring your own Identity).  The convenience of single-sign on is extended to customers/users/prospects when organizations enable access to business services using a social ID.  The last thing I want when visiting a website or blog is to create another account.  So using my Google or Twitter ID is a very nice quick way to get access without having to go through a registration process that creates another username/password that I have to try to remember. The convenience of not having to maintain multiple passwords is obvious, whether you are an employee or customer - and the security benefit of not having lots of passwords to lose or forget is there as well. Are enterprises allowing employees to use their personal (social) IDs for enterprise apps?  Not yet, but we are moving in the right direction, and we will get there some day.

    Read the article

  • XML configuration versus annotation-based configuration

    - by Abarax
    In a few large projects I have been working on lately, it seems to have become increasingly important to choose one or the other (XML or annotations). As projects grow, consistency is very important for maintainability. My question is: what do people prefer? XML-based or annotation-based configuration, or both? Everybody talks about XML configuration hell and how annotations are the answer, but what about annotation configuration hell?
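
    For readers weighing the two: annotation metadata lives in the source and moves with refactorings; XML metadata lives outside it and can vary per deployment without recompiling. A minimal Spring-flavoured illustration (class and bean names are invented):

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Service;

        interface InvoiceRepository {}   // stub type for the illustration

        // Annotation style: the wiring is visible right where the class is defined.
        @Service
        public class InvoiceService {
            private final InvoiceRepository repo;

            @Autowired
            public InvoiceService(InvoiceRepository repo) {
                this.repo = repo;
            }
        }

        // XML style: the same wiring declared externally, swappable per deployment:
        // <bean id="invoiceService" class="com.example.InvoiceService">
        //     <constructor-arg ref="invoiceRepository"/>
        // </bean>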

    Read the article
