Search Results

Search found 27120 results on 1085 pages for 'link building'.

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling to delete a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites). What we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago, so we can run our deployment procedure in a local environment before we do so on our live / public environment. We use this command: rm -r websites. This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we are getting files that don't exist in the .tar backup; in fact these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory? Constraints: our .tar backup does not include these files; we do not want to use the shred command; we do not want to use 3rd-party applications; the solution should be usable via a terminal (SSH). Many thanks! EDIT: Er... we fixed it. It turns out the files were reappearing because of a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)
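
    Given the resolution, a minimal sketch of how one might spot and handle such a link after restoring the tar (the paths below are assumptions, not the actual layout):

      # list any symbolic links inside the restored tree
      find /var/www/websites -type l -ls

      # remove only the link itself (this does not follow it)
      rm /var/www/websites/some-link

      # or, if the files on the other end should go too, delete the target explicitly
      rm -r /var/www/shared-assets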

  • Google Chrome - Issues with download dialogs using BinaryWrite

    - by Mila
    Hello, I have an empty ASP.NET page with code-behind to support downloading of PDF files. The page is called from a link-like web control that has NavigateUrl set to this page. In short, I am using the following for streaming:

      Response.Buffer = false;
      Response.ClearHeaders();
      Response.ContentType = "application/x-pdf";
      Response.AddHeader("Content-Disposition", "attachment; filename=\"MyPDFFile.pdf\"");
      byte[] binary = (dataReaderRemote[DataPDFFieldName]) as byte[]; // dataReaderRemote[DataPDFFieldName] has previously retrieved the data
      if (binary != null)
      {
          MemoryStream memoryStream = new MemoryStream(binary);
          int sizeToWrite = CHUNKSIZE; // CHUNKSIZE = 1024
          for (int i = 0; i < binary.GetUpperBound(0) - 1; i = i + CHUNKSIZE)
          {
              if (!Response.IsClientConnected) return;
              if (i + CHUNKSIZE >= binary.Length) sizeToWrite = binary.Length - i;
              byte[] chunk = new byte[sizeToWrite];
              memoryStream.Read(chunk, 0, sizeToWrite);
              Response.BinaryWrite(chunk);
              Response.Flush();
          }
      }
      Response.Close();

    IE as well as Firefox bring up the download prompt asking whether you wish to open or save the file, while the user remains on the same page containing the link. Google Chrome, however, opens a new blank tab and downloads the file automatically. Is there any way to prevent Chrome from opening the extra blank and therefore useless tab? I am using Google Chrome version 5.0.375.55 (Official Build 47796) on Windows XP. Thanks in advance! Mila

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no RSYNC, etc... (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. So after talking with a Rackspace tech, he said they cannot guarantee that symlinks would always work. Obviously this is because if you have a symlink that uses an absolute path like this: /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir, then if your files get moved to a different disk (or whatever they do), the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. So if I have the following link: files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir, you can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work either. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they will break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
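
    For reference, a relative symlink stores only the literal relative target, which can be checked with readlink; a quick sketch (paths are hypothetical):

      cd files/sites/www.mysite.com
      ln -s ../www.myothersite.com/anotherdir mylink
      readlink mylink      # prints ../www.myothersite.com/anotherdir; no absolute path is stored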

  • Should I enable 802.3x hardware flow control?

    - by Stu Thompson
    What is the conventional wisdom regarding 802.3x flow control? I'm setting up a network at a new colo and am wondering whether I should be enabling it or not. My oh-cool-a-bright-and-shiny-new-toy self wants to enable it, but this seems like one of those decisions that could blow up in my face later on. My network:

      - An HP ProCurve 2510G-24 switch
      - A pair of Debian 5 HP DL380 G5's with built-in NC373i 2-port NICs, LACP'd as one link, 9000-byte jumbo frames enabled (application servers)
      - A pair of hand-built Ubuntu servers with 4-port Intel Pro/1000 cards, LACP'd as one link, 9000-byte jumbo frames enabled (NAS)
      - A few other servers with single 1Gbps ports, plus one with 100Mbps

    Most of this kit supports 802.3x. I've been enabling it as I go along and am about to test the network. But as my 'go live' day nears, I am worried about the 802.3x decision, as I've never explicitly used it before. Also, I've read some 10-year-old articles out there on the Intertubes that warn against using flow control. Should I be enabling 802.3x hardware flow control?
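
    On the Linux side, pause-frame support can be checked and toggled per NIC with ethtool; a quick sketch (interface names are assumptions):

      ethtool -a eth0                  # show current rx/tx pause settings
      ethtool -A eth0 rx on tx on      # enable flow control in both directions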

  • Instabilities with Bridged and bonded interfaces

    - by Henry-Nicolas Tourneur
    I posted yesterday to get a working setup with several bridged interfaces used for virtual machines (KVM/libvirt). One of the bridged interfaces just uses eth3 as its port, while the second one (public traffic) uses an Ethernet bonded interface. That setup works, but not all the time! I can start a download from a VM, and then it will stop and freeze! So I don't know if my bridge parameters are correct; could you check the config below?

      iface eth3 inet manual

      auto bond0
      iface bond0 inet manual
          slaves eth1 eth2
          pre-up ip link set bond0 up
          down ip link set bond0 down

      auto br0
      iface br0 inet static
          address 10.160.0.7
          netmask 255.255.255.128
          bridge_ports eth3
          bridge_fd 9
          bridge_hello 2
          bridge_maxage 12
          bridge_stp on

      auto br0:1
      iface br0:1 inet static
          address 10.160.0.9
          netmask 255.255.255.255

      auto br0:2
      iface br0:2 inet static
          address 10.160.0.10
          netmask 255.255.255.255

      auto br1
      iface br1 inet static
          address 217.4.40.242
          netmask 255.255.255.240
          gateway 217.4.40.241
          pre-up /etc/network/firewall start
          bridge_ports bond0
          bridge_fd 9
          bridge_hello 2
          bridge_maxage 12
          bridge_stp on

      auto br1:1
      iface br1:1 inet static
          address 217.4.40.252
          netmask 255.255.255.255

      auto br1:2
      iface br1:2 inet static
          address 217.4.40.253
          netmask 255.255.255.255

    And yes, the host also sometimes logs martian-source warnings:

      kernel: [249146.055172] martian source 10.160.0.17 from 10.160.0.10, on dev vnet2
      kernel: [249146.073122] ll header: ff:ff:ff:ff:ff:ff:54:52:00:76:c3:5c:08:06
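
    A few state checks that might help narrow down where the freezes come from (a sketch; interface names follow the config above):

      cat /proc/net/bonding/bond0      # bonding mode and slave link status
      brctl show                       # which ports each bridge actually has
      brctl showstp br1                # STP state, since bridge_stp is on here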

  • 2 NICs, 2 default gateways

    - by andre.dias
    Here is my scenario: I have a server with 2 NICs, each one with a different IP, connected to different routers. Almost everything is configured the way I need. Traffic coming in on eth0 exits using eth0, traffic coming in on eth1 exits using eth1, and there is a default gateway configured:

      $ route
      default  IP   0.0.0.0  UG  0 0 0  eth0

    With this configuration, the traffic generated on the server goes out using eth0 (lynx www.google.com, for example). The problem is that the Internet link on eth0 went down today. The traffic coming in on eth1 was OK... no problem. But the traffic generated on the server was a problem... the default gateway was out... no access to the Internet anymore (no more lynx www.google.com). So I added a new default gateway configuration pointing to eth1. I kept it that way for 30 minutes... 2 default gateways, but just one was "working"... and everything was working just fine. But then I removed the eth0 gateway entry because, well, 2 default gateways is kind of weird. My question: is there any problem with keeping these 2 default gateways, one for each NIC, so I don't need to do anything when one link goes down again?

      $ route
      default  IP1  0.0.0.0  UG  0 0 0  eth0
      default  IP2  0.0.0.0  UG  0 0 0  eth1
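
    A sketch of one way to keep both defaults without ambiguity is to give them different metrics, so the eth1 route only takes over when the eth0 route goes away (gateway addresses are placeholders):

      ip route add default via <IP1-gateway> dev eth0 metric 100
      ip route add default via <IP2-gateway> dev eth1 metric 200

    Note that the kernel won't automatically notice a dead upstream if eth0 itself stays link-up; a monitoring script or fuller policy routing would still be needed for real failover.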

  • SLES AutoYaST Script Validity Verification

    - by Xerxes
    Does anyone here write their own customized AutoYaST scripts for building SLES servers? I'm not talking about generating them with yast2 autoyast. If so, have you found a way to verify the syntax? xmllint is good as far as telling you that the XML syntax is well-formed, but without an up-to-date DTD it can't tell you anything more, and the shipped DTDs are out of date. I've opened a ticket with Novell on this, but who knows when and what I'll hear back.
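
    For what it's worth, a sketch of the two xmllint modes in question (the DTD path is an assumption and will vary by SLES release):

      xmllint --noout autoinst.xml                                                      # well-formedness only
      xmllint --noout --dtdvalid /usr/share/autoinstall/dtd/profile.dtd autoinst.xml    # validate against a DTD, if you have a current one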

  • Multiple IPs on firewall, are these virtual interfaces or what?

    - by Jakobud
    We have 5 static IP addresses from our ISP: XXX.XXX.XXX.180, XXX.XXX.XXX.181, XXX.XXX.XXX.182, XXX.XXX.XXX.183 and XXX.XXX.XXX.184. On our firewall box, the NIC that is connected to our cable modem appears to have all 5 IP addresses set on it. A previous IT guy set this thing up, and I'm not sure exactly what he did. Are these virtual interfaces on this NIC, or what? Here is my ip addr output for that NIC:

      rwd0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
          inet XXX.XXX.XXX.180/24 brd XXX.XXX.XXX.186 scope global rwd0
          inet XXX.XXX.XXX.181/29 brd XXX.XXX.XXX.186 scope global rwd0:FWB9
          inet XXX.XXX.XXX.182/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB10
          inet XXX.XXX.XXX.183/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB11
          inet XXX.XXX.XXX.184/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB12
          inet6 fe80::250:8bff:fe61:5734/64 scope link
             valid_lft forever preferred_lft forever

    I'm a bit new to firewalls and networking, so I'm just trying to figure out what he had going on here. I know he used Firewall Builder to configure the iptables rules; maybe that has something to do with the "FWB" I see in those names? So my questions are: (1) What is going on here? Virtual interfaces? Or something else? (2) If we want to put in a second firewall in parallel with this firewall, but we only want it to handle traffic to XXX.XXX.XXX.182, how do we get rid of the static XXX.XXX.XXX.182 address on this existing firewall box?
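
    If these turn out to be plain secondary addresses (IP aliases) rather than separate interfaces, a sketch of how one would be inspected and removed (interface and prefix taken from the output above):

      ip addr show dev rwd0                      # the rwd0:FWB* entries are alias labels, not separate NICs
      ip addr del XXX.XXX.XXX.182/29 dev rwd0    # drop just the .182 secondary address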

  • Load balancing with 2 wireless cards

    - by user2544786
    I'm thinking about building a wireless load balancer (if that makes sense). For example, the first wireless card would accept all connections for IP 192.168.1.1 and the second card would serve requests for 192.168.1.2. I know that I can assign both IPs to a single card, and all requests will then be served by that one wireless card. Would it be better (more bandwidth, a more stable connection, etc.) to have two physical cards instead?

  • Performance of ClearCase servers on VMs?

    - by Garen
    Where I work, we are in need of upgrading our ClearCase servers and it's been proposed that we move them onto a new (yet-to-be-deployed) VMware system. In the past I've not noticed significant performance problems with most applications when running in VMs, but given that ClearCase "speed" (i.e. dynamic-view response times) is so latency-sensitive, I am concerned that this will not be a good idea. VMware has numerous white papers detailing performance issues based on network traffic patterns that reinforce my hypothesis, but nothing particularly concrete for this particular use case that I can see. What I can find are various forum posts online, but they are somewhat dated, e.g.: "ClearCase clients are supported on VMWare, but not for performance issues. I would never put a production server on VM. It will work but will be slower. The more complex the slower it gets. accessing or building from a local snapshot view will be the fastest, building in a remote VM stored dynamic view using clearmake will be painful..... VMWare is best used for test environments" (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10) and: "VMware + ClearCase = works but SLUGGISH!!!!!! (windows)(not for production environment) My company tried to mandate that all new apps or app upgrades needed to be on/moved VMware instances. The VMware instance could not handle the demands of ClearCase. (come to find out that I was sharing a box with a database server) Will you know what else would be on that box besides ClearCase? Karl" (via http://www.cmcrossroads.com/forums?func=view&id=44094&catid=31) and: "... are still finding we can't get the performance using dynamic views to below 2.5 times that of a physical machine. Interestingly, speaking to a few people with much VMWare experience and indeed from running builds, we are finding that typically, VMWare doesn't take that much longer for most applications and about 10-20% longer has been quoted." (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10) Which brings me to the more direct question: does anyone have any more recent experience with ClearCase servers on VMware (if not any specific, relevant performance advice)?

  • RAID 5 configuration and future expansion

    - by Alexis Hirst
    Hi, I am building a PC to act as a file server among other things, and I was wondering whether it is a good idea to create 2 partitions on the RAID 5 array, one for the OS and one for data, or to have a separate disk for the OS and use the array only for data. Also, one day I may want to add another disk to the array, so would there be any issues with resizing the data partition if I had the OS partition on the RAID 5 array as well?
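
    If this ends up as Linux software RAID (an assumption; hardware RAID controllers have their own tools), growing the array later looks roughly like this sketch:

      mdadm --add /dev/md0 /dev/sdd1             # add the new disk to the array
      mdadm --grow /dev/md0 --raid-devices=4     # reshape from 3 to 4 devices
      # the partition/filesystem on top still has to be resized afterwards, e.g. resize2fs for ext filesystems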

  • Authenticating Linked Servers - SQL Server 8 to SQL Server 10

    - by jp2code
    We have an old SQL Server 2000 database that has to be kept because it is needed on our manufacturing machines. It also maintains our employee records, since they are needed on these machines for employee logins. We also have a newer SQL Server 10 database (I think this is 2008, but I'm not sure) that we are using for newer development. I have recently learned (i.e. today) that I can link the two servers, which would allow me to access the employee tables from the newer server. Following the SF post SQL Server to SQL Server Linked Server Setup, I tried adding the link. On our SQL Server 2000 machine I got an error, and on our SQL Server 10 machine I got another (error screenshots not included here). The messages, though worded differently, probably say the same thing: I need to authenticate, somehow. We have an Active Directory, but it is on yet another server. What, exactly, should be done here? One reply said to check the Security settings, but did not say what else to do. Both servers are set to SQL Server and Windows Authentication mode. Now what?

  • Number of Splitters on Coaxial Connection and Cable Internet Quality of Service

    - by Matthew Ruston
    Does running multiple coaxial splitters on a single coaxial cable line affect quality of service for cable internet connections? Suppose there are 2-4 splitters between the cable line coming into a building and the connection to a cable modem; does this negatively affect the latency, throughput, etc. of a cable internet connection by any measurable amount? Is there a maximum number of times that a coaxial cable can be split into multiple internet and TV connections before QoS suffers?

  • Linking to network shares from Sharepoint pages

    - by Russell C
    So the place I work decided to set up a Microsoft SharePoint 2010 server for task management, and I (as the lowly entry-level intern) have been tasked with "figuring it out." One thing that the end users really, really, really want is the ability to link to network shares (that are readable by anyone who will be using SharePoint) from a SharePoint web page. In order to do this, I have edited the HTML manually with several lines that look like the following:

      <a href="file://server/share">Server Share</a>

    This works (sometimes), but the link reported by SharePoint is often wrong, and editing pages that contain these links will mangle the code such that when I open it, the code no longer looks like what it did when I last hit save (breaking all those links). Obviously this is not sustainable. I've been told by coworkers that "It worked that way at the last place I worked", but I haven't found out how yet. Any ideas on how this would work, or am I barking up the wrong tree? None of the knowledge searches I've done shed any light on the situation. Thanks for any help! -Russell P.S. It should be noted that the file option in an href tag ONLY works in IE (which is a real bummer since we mostly use Firefox).

  • Installing MySQL-Ruby on Linux and Ruby 1.9.2

    - by Klam
    I am having an absurdly difficult time getting MySQL-Ruby to install on RedHat 4 using Ruby 1.9.2. I am behind a company proxy that prevents pretty much any package tool from connecting to external repositories, so "gem install mysql" isn't going to cut it. I have tried installing the mysql-ruby gem locally, but it fails with a mysterious:

      $ gem install mysql-2.8.1.gem
      Building native extensions.  This could take a while...
      ERROR:  Error installing mysql-2.8.1.gem:
          ERROR: Failed to build gem native extension.
      /ns/local/apps/internal/SWS/MetricsPublisher/ruby/bin/ruby extconf.rb
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lm... yes
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lz... yes
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lsocket... no
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lnsl... yes
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lmygcc... no
      checking for mysql_query() in -lmysqlclient... no
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary
      libraries and/or headers.  Check the mkmf.log file for more details.
      You may need configuration options.

    I have also tried building the module myself by following the included readme. The results:

      $ ruby extconf.rb --with-mysql-include=/path_to_my_sql_headers/mysql/include/ --with-mysql-lib=/path_to_my_sql_lib/mysql/lib/
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lm... yes
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lz... yes
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lsocket... no
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lnsl... yes
      checking for mysql_query() in -lmysqlclient... no
      checking for main() in -lmygcc... no
      checking for mysql_query() in -lmysqlclient... no
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary
      libraries and/or headers.  Check the mkmf.log file for more details.
      You may need configuration options.

    Does anybody have any ideas? Quite frankly, I don't even care if MySQL-Ruby specifically works; I just want ANY means of connecting to a MySQL DB through a ruby call in ruby 1.9. Thanks.
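
    A sketch of the usual fix when extconf.rb can't find mysqlclient: install the MySQL client development package and point the gem at mysql_config (package and path names here are assumptions for RHEL-style systems):

      # client headers/libs (from local media or an internal mirror, given the proxy)
      yum install mysql-devel

      # then tell the gem's extension build where mysql_config lives
      gem install mysql-2.8.1.gem -- --with-mysql-config=/usr/bin/mysql_config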

  • VMWare tools not installing with an error

    - by JDS
    VMware Tools are not installing on Ubuntu 12.04. I'm using Chef to manage the installation, but the apt commands fail if run manually. I'm using the VMware Tools Debian repo, for example:

      $ cat /etc/apt/sources.list.d/vmware-tools-source.list
      deb http://packages.vmware.com/tools/esx/5.0u2/ubuntu precise main

    When trying to install, most packages seem to go OK, but one, "vmware-tools-foundation", does not. Example:

      $ apt-get -q -y install vmware-tools-esx-nox=8.6.10-1.precise
      Reading package lists...
      Building dependency tree...
      Reading state information...
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
       vmware-tools-esx-kmods-3.2.0-23-generic : Depends: vmware-tools-foundation (>= 8.6.10) but it is not going to be installed
       vmware-tools-esx-nox : Depends: ...snip list of deps...
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

      $ apt-get -f install
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following extra packages will be installed:
        vmware-tools-foundation
      The following NEW packages will be installed:
        vmware-tools-foundation
      0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded.
      7 not fully installed or removed.
      Need to get 0 B/5,886 B of archives.
      After this operation, 86.0 kB of additional disk space will be used.
      Do you want to continue [Y/n]? y
      (Reading database ... 103499 files and directories currently installed.)
      Unpacking vmware-tools-foundation (from .../vmware-tools-foundation_8.6.10-1.precise_all.deb) ...
      VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again.
      dpkg: error processing /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb (--unpack):
       subprocess new pre-installation script returned error exit status 1
      Errors were encountered while processing:
       /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    The key seems to be this error: "VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again." However, I've tried removing and purging and can't seem to "trick" VMware Tools into thinking the packages are gone. Apt thinks they are gone. Is there some service/file/cache/lock left over that VMware Tools sees that makes it think VMware Tools are still installed? I've googled and googled, but there is no answer to this question for my particular circumstances on the interwebs. VMware's documentation of this error is minimal.
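
    A hedged guess at what the preinstall script is detecting: remnants of a tarball (vmware-install.pl) installation rather than the apt packages. A sketch of what one might check and clean up (paths are the usual tarball-install locations, not verified for this particular box):

      ls -d /etc/vmware-tools /usr/lib/vmware-tools 2>/dev/null   # leftovers from a tarball install
      /usr/bin/vmware-uninstall-tools.pl                          # uninstaller shipped by the tarball install, if present
      apt-get purge 'vmware-tools-*' && apt-get install vmware-tools-esx-nox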

  • Removing default mysql on Slackware 13

    - by bullettime
    I was playing around with the default MySQL that comes with Slackware 13, and I think I broke it somehow. I don't want to fix it; I'd like to start from scratch, building from source and everything, but first I have to remove the broken installation. How can I do this?
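
    A sketch of the Slackware-native way to do this (the exact package name and version are assumptions; check what is actually recorded on the box):

      ls /var/log/packages | grep -i mysql      # find the installed package name
      removepkg mysql-5.1.xx-i486-1             # remove it with Slackware's package tool
      rm -rf /var/lib/mysql                     # optionally wipe the old databases too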

  • How to install svn 1.8.5 with neon on Mavericks?

    - by Alex
    Has anyone installed svn 1.8.* together with neon on OS X Mavericks? I followed this tutorial: http://jason.pureconcepts.net/2012/10/updating-svn-mac-os-x/ But after trying to configure svn to use neon:

      ./configure --prefix=/usr/local --with-neon

    I get this warning:

      configure: WARNING: unrecognized options: --with-neon

    Building and installation work fine after this, but of course I cannot connect to WebDAV repositories.
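
    For context, Subversion 1.8 dropped the neon HTTP library in favor of serf, which is why configure no longer recognizes --with-neon. A sketch of a configure line for WebDAV support with 1.8 (the serf install prefix is an assumption):

      ./configure --prefix=/usr/local --with-serf=/usr/local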

  • Install self-signed certificate on local server (iis)

    - by ile
    On this page there are instructions on how to create a self-signed cert (on Apache) and how to install this certificate on the server. I found this page (http://www.visualwin.com/SelfSSL/) with instructions on how to create a self-signed certificate on Windows (IIS). I followed the instructions, and when I type https://myip/myapp (this leads to localhost because I set my router's port forwarding to go to localhost on my PC), this part works. From the first link, the most important part is this: "What needs to be installed in IE is actually the Root CA Certificate. In the how-to above, the Root CA Certificate is called ca.crt. Copy this file to the server that is running QuickBooks. The following is for IE6: Open IE - Tools - Internet Options - Content - Certificates - Trusted Root Certification Authorities tab - Import, Next, Browse to 'ca.crt' - Next, Next, Finish, Close, OK." The part that is missing in the second link is that there is no instruction on how to get the .crt file, so I tried to get it myself. What I did was the following: I opened https://myip/myapp in Firefox and the "This Connection is Untrusted" screen appeared. Then I clicked on "Add Exception", and below "Certificate Status" I clicked "View". Under the Details tab I clicked Export, chose Save as type "X 509 Certificate (PEM)", and the file was saved with a .crt extension. Then I opened IE8 and followed the above instructions. But after opening https://myip/myapp in IE8 I still always get the warning screen. Does anyone know what I am doing wrong? Thanks, Ile

  • Store Varnish cache on hard disk

    - by Great Kuma
    Hello, the situation is: I'm building a PHP application and need HTTP caching. Varnish is great, and lots of people tell me that Varnish stores the cached data in RAM. But I want it cached on the hard disk. Is there any way to store the Varnish cached data on the hard disk? Thanks.
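
    For reference, Varnish's storage backend is selected with the -s option when varnishd starts; a file-backed sketch (listen address, backend, path and size are placeholders):

      varnishd -a :80 -b localhost:8080 -s file,/var/lib/varnish/varnish_storage.bin,10G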

  • What PSU would be needed for a mid-range computer?

    - by iconiK
    I am building a mid-range computer primarily for gaming and graphic design. With the following components, what power supply unit would be good, in terms of having ample power for future expansion, with good efficiency and quiet operation, but most important, reliability in the long (5+ years) run?

      - Gigabyte GA-H67MA-UD2H LGA 1155
      - Intel Core i5 2300 2.8GHz
      - Crucial CT2KIT51264BA1339 2x4GB kit
      - ASUS HD 6850 DirectCU
      - Intel X25-V 40GB SSD
      - 2x Seagate 7200.12 1TB HDD in RAID 1
      - Antec NSK-3480 µATX case

  • Web server build end-user acceptance testing

    - by Zak
    I have a web server image that I am responsible for building across multiple servers. I have a list of about 50 URLs that I am supposed to visit and confirm that the correct content is showing up. Which automated tools exist to do this easily (without writing a bunch of curl requests and regexes in a script file)?
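
    For comparison, the hand-rolled baseline being avoided is only a few lines; a sketch (the checks.txt format of "URL<TAB>expected text" is an invention for illustration):

      # checks.txt: one line per check, URL and expected string separated by a tab
      while IFS=$'\t' read -r url expected; do
          if curl -fsS "$url" | grep -q "$expected"; then
              echo "OK   $url"
          else
              echo "FAIL $url"
          fi
      done < checks.txt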

  • What does a status of "Backup" mean for Windows 7 local user profiles?

    - by Howiecamp
    Summary: Upon logging on to Windows 7 RTM I get a message that my profile can't be loaded, and a temporary user profile is created. I logged off and back on as Administrator. The user profiles dialog shows my user profile with a Type of "Local" and a Status of "Backup" rather than "Local", which is what it should be. How can I change this to make my user profile accessible? The long story: My PC has a single hard drive partitioned into a C: and a D:. I'd moved my user profile directory (c:\Users) to d:\Users, removed c:\Users and then used mklink.exe to create a directory symbolic link c:\Users -> d:\Users. It has worked like a charm since I did it. Today, I made a System Restore Point for drives C: and D:. Next, I dismounted D: and used the Disk Management tool to remove the "D:" drive letter from the D volume. (My plan was to reboot and then redirect the symbolic link.) Upon reboot, I got the user profile error described above. Finally, I restored the System Restore Points that I'd created for both drives and then rebooted again. Same issue.

  • GnuPG Command Line - Verifying KeePass Signature

    - by Stisfa
    I'm trying to verify the PGP signature of the latest version of the KeePass 2.14 setup file against its published signature, but this is the output I receive:

      C:\Program Files (x86)\GNU\GnuPG>gpg.exe --verify C:\Users\User\Desktop\KeePass-2.14-Setup.exe
      gpg: no valid OpenPGP data found.
      gpg: the signature could not be verified.
      Please remember that the signature file (.sig or .asc)
      should be the first file given on the command line.

    I found this command here, but it made no mention of ".sig" or ".asc" files, so I figured I did something wrong. After reading http://www.gnupg.org/documentation/manuals/gnupg/gpgv.html#gpgv, I further tried the following:

      C:\Program Files (x86)\GNU\GnuPG>gpg.exe --pgpfile C:\Users\User\Desktop\KeePass-2.14-Setup.exe
      gpg: Invalid option "--pgpfile"

    As you can see, the results are quite confusing... I took a look at this on SuperUser (http://superuser.com/questions/16160/short-easy-to-understand-explanation-of-gpg-pgp-for-nontechnical-people - I couldn't use "a href" due to the built-in spam filter that discriminates against users with < 10 rep; this is the same reason for the link above this link), but none of the links seemed to really address my question, at least not directly enough for me to get any idea of how to move forward. Can anybody here help me with the esoteric technicality of OpenPGP and the associated use of the GnuPG program? I've felt pretty dumb learning VBS, but this is beyond humiliating: it's absolutely debilitating and maiming whatever confidence I had in my IT skills (then again, I have no justification for boasting either, as I have yet to get my A+ cert, lol).
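
    A sketch of how the verification is normally done: download the detached signature next to the installer, import the signer's public key, and pass the signature file first (file names and the key source are assumptions):

      rem import the KeePass signing key (obtained from the KeePass website)
      gpg.exe --import keepass-public-key.asc

      rem verify: detached signature first, then the file it signs
      gpg.exe --verify KeePass-2.14-Setup.exe.asc KeePass-2.14-Setup.exe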

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a FreeRADIUS setup that is being used to provide authentication for users on a wireless network. The access points are all MikroTik hardware and the users are connected 24/7. We've been using daloRADIUS with MySQL and FreeRADIUS 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present. So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA and was looking specifically at the following section:

      # Since our users may be connected for more than 24 hours at a time we keep this
      # in here, it will reset some attributes daily so that the accounting packets
      # work correctly
      always fail {
          rcode = fail
      }
      always reject {
          rcode = reject
      }
      always ok {
          rcode = ok
          simulcount = 0
          mpp = no
      }

    However, that link references FreeRADIUS 1, and I can't find this in the radius.conf file for FreeRADIUS 2. What does it do, and could it be a reason I'm missing data? EDIT: I have found one issue. We have a backup FreeRADIUS server that is also receiving the accounting packets. Although they are replicating, it's only a master/slave configuration: if the slave receives accounting packets, it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced because of the always module. Is there anything special I need to configure on the MikroTik APs or in FreeRADIUS 2 for clients connected 24/7?
