Search Results

Search found 4426 results on 178 pages for 'bunch'.

Page 119/178 | < Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >

  • Identify differences between MP3 files

    - by Thingomy
    I have two old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side or are identical, but I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides and they both seem to play fine (but at nearly 10 minutes per song, listening to each twice, I haven't done many).

    I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10 KB of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree.

    I found "How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?" -- but I can't run AllDup since I only run Linux, and from the sounds of it, it would only partially solve my issues anyway.
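
    A minimal sketch of one way to check this, assuming ffmpeg is installed (the paths are placeholders): hashing only the decoded audio ignores ID3 tags entirely, so two files whose differences are confined to tags should produce identical hashes.

        # hash the decoded audio of each candidate pair; tag-only differences give equal MD5s
        for f in side-a/track01.mp3 side-b/track01.mp3; do
            ffmpeg -loglevel error -i "$f" -vn -f md5 -
        done

    If the hashes differ, the audio data itself differs and a tag-stripping comparison won't help.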

    Read the article

  • How do I compile DarWINE in PowerPC Mac OS X 10.4.11...?

    - by Craig W. Davis
    So far I've tried using MacPorts, which gives me this error:

        Error: Cannot install wine for the arch(s) 'powerpc' because
        Error: its dependency pkgconfig is only installed for the archs 'i386 ppc'.
        Error: Unable to execute port: architecture mismatch
        To report a bug, etc...

    (I'm not allowed to post two links due to being a new poster.) I've also tried using the build script I found in the DarWINE 0.9.12 SDK download from the DarWINE SourceForge.net project page, and the build script I found at http://code.google.com/p/osxwinebuilder/#Building_Wine_via_the_script. None of these attempts to build DarWINE have actually worked.

    Whenever I build using the DarWINE build script I run it as follows:

        1. I decompress the WINE tarball into ~/Downloads/WINE
        2. I cd into ~/Downloads/DarWINE.
        3. I run ./winemaker ~/Downloads/WINE/wine-1.2.2 or ./winemaker ~/Downloads/WINE/wine-1.2-rc2 (the reason for trying WINE 1.2-rc2 is that some people managed to get it to build on PowerPC Macs running 10.5.8).

    I made sure to install Xcode Tools 2.5 and all the SDKs too. The net result is either a syntax error when running the checked-out Google Code DarWINE build script, or a bunch of make errors when running the official DarWINE build script that I forcefully extracted from the DarWINE 0.9.12 SDK .dmg file using Pacifist:

        make[1]: winegcc: Command not found
        make[1]: *** [main.o] Error 127
        make: *** [dlls/acledit] Error 2

    I'm trying to build DarWINE on a mid-April 2006 1.42 GHz eMac (DL SuperDrive, Bluetooth 2.0+EDR) with 2 GB of RAM running 10.4.11, as I mentioned earlier. (It came with 10.4.4 on the Mac's Restore DVD-ROM that I ordered from 1-800-SOS-APPL, and coconutIdentityCard told me it was made on April 12th 2006, which I know is right because when I reinstalled Mac OS X 10.4.4 it showed it was registered to/previously owned by a Hawaiian school.)
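
    One thing that might be worth trying on the MacPorts route -- this is my assumption, not something from the original post, and it may or may not clear the architecture mismatch -- is rebuilding the dependency MacPorts is complaining about and then retrying the wine port:

        # force a clean rebuild of pkgconfig, then retry wine
        sudo port clean --all pkgconfig
        sudo port -f uninstall pkgconfig
        sudo port install pkgconfig
        sudo port install wine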

    Read the article

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers:

        1 central Linux server, a VPS
        8 satellite Linux servers, "crappy shared hostings"

    I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; there is no need to look for "new" files on the satellite servers). There are a couple of catches, though:

        I can't have any custom software on the satellite servers, or do strange command-line things that auto-connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything.
        I need to send the files over FTP.
        I also need to have, on my central server, a list of the files that are available on each of the satellite servers, to make sure they are ready before I send traffic to them.

    If I were to do this manually, the steps would be: get the list of files on a satellite server; compare it to my own and send the files that are missing; get the list of files again and store it in my central database.

    I'd like to know what tools are out there that can alleviate as much of this as possible -- first the syncing, and then getting the list of files available on the other server. I'm going to be doing everything from PHP; I'm not sure if there are good tools to use FTP from PHP, which I'm pretty sure I'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel
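
    As a sketch of the syncing half, assuming lftp can be installed on the central VPS (host names, credentials and paths below are placeholders):

        # push only newer/missing files to one satellite over FTP
        lftp -u ftpuser,ftppass ftp.satellite1.example.com \
             -e "mirror -R --only-newer /srv/shared-files /public_html/shared; quit"

        # afterwards, pull a recursive file listing for the central database
        lftp -u ftpuser,ftppass ftp.satellite1.example.com \
             -e "find /public_html/shared; quit" > satellite1-files.txt

    The same commands can be driven from PHP with exec(), or the listing step can be done natively with PHP's ftp_nlist()/ftp_rawlist() functions.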

    Read the article

  • Expanding to dual video cards

    - by Anthony Greco
    I know a lot of factors can come into play here, so I will list my current hardware and setup:

        MOBO: GIGABYTE GA-890FXA-UD5 [http://www.newegg.com/Product/Product.aspx?Item=N82E16813128441]
        Processor: AMD Phenom II X6 1090T Black Edition Thuban 3.2GHz [http://www.newegg.com/Product/Product.aspx?Item=N82E16819103849]
        RAM: G.SKILL Ripjaws Series 16GB (4 x 4GB) [https://secure.newegg.com/NewMyAccount/OrderHistory.aspx?RandomID=4933910872745320111128011418]
        Current video card: EVGA 01G-P3-1366-TR GeForce GTX 460 SE [http://www.newegg.com/Product/Product.aspx?Item=N82E16814130591]
        OS: Windows 7 Ultimate x64

    Currently I can run 2 monitors just fine in my setup. However, I want to upgrade this to 4 monitors. My question is: what is the best way to do this? I remember reading in the past that I need the same type of video card, but would any GeForce GTX work, or do I need that very specific model (EVGA 01G-P3-1366-TR GeForce GTX 460 SE)? Are there any issues I should be aware of before I order 2 new monitors and a video card? Are there video cards better suited to this? I know NVIDIA offers SLI, but I do not know if my mobo supports it. My mobo also offers a CrossFireX configuration, though from what it says only Radeon cards are compatible.

    Any suggestions / feedback on my best route with my current setup is appreciated. Even if you suggest buying 2 new identical video cards, as long as you mention which and why that is better, I really appreciate it.

    Note: I really do not do any gaming. I sometimes do some 3D work in Unity and very rarely in Maya. Besides that I do almost all my computer work in Visual Studio and Photoshop. However, I need the 2 extra monitors because I sometimes monitor 5 remote desktops at once, and switching between them on only 2 is becoming a very big pain. Also, seeing 3 side by side while I work on the 4th will be very helpful. Again, I appreciate any feedback, as I have googled a bunch and just want to make sure what I buy will work.

    Read the article

  • Firefox crash on first load on Ubuntu Linux on Windows Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop.

    Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about)... but... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then everything stops: disk, icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. After a reboot, I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange.

    The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet network. As this is the first Firefox start since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.)
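
    A small diagnostic sketch, assuming a terminal is still reachable before the hang (the log paths are the Ubuntu defaults): starting Firefox from a shell and watching the kernel log can show whether the freeze is an X/driver problem rather than a Firefox one.

        # start firefox from a terminal so any error output is captured
        firefox 2>&1 | tee ~/firefox-start.log

        # in a second terminal (or on a text console via Ctrl+Alt+F1, if it still responds)
        tail -f /var/log/syslog
        dmesg | tail -n 50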

    Read the article

  • Troubleshoot dropped wireless connections

    - by Jack
    I was recently hired in the IT department of a small company (~180 users), and one of the issues people have been complaining about is having their Wi-Fi connections drop during meetings. The company is using an HP ProCurve Wireless LAN with 10 APs and a controller unit located in the server room. I don't have any experience troubleshooting WLANs in a multi-AP environment, so I'm trying to at least gather information using free or cheap tools.

    I did a basic site survey using the free version of Ekahau HeatMapper and discovered the following in one of the conference rooms that has been a problem. The program picked up three access points (plus a bunch of others with much lower signals that were out of range):

        AP 1: SSID: "Unknown SSID" - Signal strength: -48 dBm to -40 dBm. Channel: 2
        AP 2: SSID: "CompanyMain" - Signal strength: -35 dBm or greater. Channel: 2. Security: WEP (This is the main SSID for the company's WLAN.)
        AP 3: SSID: "CompanyGuest" - Signal strength: -40 dBm to -35 dBm. Channel: 2. Security: WPA2 (This SSID is the company's "guest" WLAN, set up to allow Internet access but prevent network access.)

    Is there anything above that is clearly a problem? I'm assuming that the unknown SSID might be a big problem, and that it is an AP from a neighboring office causing interference. Does that seem likely? Also, regarding channels, should we try changing the channels of our APs to avoid interference with that unknown SSID (since everything seems to be on channel 2)? Should our APs be on different channels? In other words, should the CompanyMain and CompanyGuest APs be on different channels?

    Finally, any recommendations for free/cheap tools to help me figure this out, and/or a good methodology to follow? Thanks in advance for any help. Jack
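
    For quick spot checks from a Linux laptop in the affected room, something like the sketch below lists every AP in range with its channel and signal (the interface name wlan0 is an assumption):

        # list nearby access points with channel and signal strength
        sudo iwlist wlan0 scan | grep -E 'ESSID|Channel|Quality|Signal'

    Comparing that against the HeatMapper survey helps confirm whether everything really is crowded onto channel 2 before deciding how to spread the company APs across the non-overlapping channels 1, 6 and 11.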

    Read the article

  • How to install Red Hat Enterprise Linux on Apple Macbook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text" with all-generic-ide on the boot line; I also removed the ide parameter and just used "linux text". The result is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading ata-something (it disappears too fast for me to read). Then the machine freezes.

    I pressed the Alt-function keys to see if I could look at the system log; here is what it says:

        Alt-F3 says: trying to mount CD device hda
        Alt-F4 says:
            status error: hda: lastFailedSense
            hda: failed opcode was: unknown
            hda: lost interrupt
            hda: drive not ready for command
            ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive -- is that possible? Thanks again for the support.

    PS: I posted in the Apple support forums (Apple.com Support Discussions > Boot Camp > Installation and Storage) and didn't get an answer.

    Read the article

  • RDP problem with Vista and Windows 7 destination

    - by MadBison
    I use a server at home to host a bunch of concurrently running Hyper-V VMs with different OSes and software for testing. I have Vista on the laptop, with all the latest SPs and patches. The server is Server 2008 R2, fully patched. The guests are a mix of XP, Vista, Server 2008 and Windows 7.

    If I connect to the Win XP or Server 2008 guests using RDP, it is always good: very quick, no speed issues. If I connect to the Vista or Win 7 guests, the response time is so slow it is unusable -- usually 6 or 8 seconds, and at times too long to measure! This happens from both the laptop running Vista and the server running Server 2008 R2.

    Does anyone know what the issue is with RDP to Vista and Windows 7 destinations? I did read this: http://blog.tmcnet.com/blog/tom-keating/microsoft/remote-desktop-slow-problem-solved.asp, and that is not the problem -- I have applied that change to all PCs.

    Read the article

  • how do you add an A record for a root domain

    - by nbv4
    This seems really simple, but I can't figure it out. I'm using xname.org since it's free and I own a bunch of domains spread over a few different registrars. The setup I desire is very simple: one A record that points the bare domain name to my server, plus a wildcard CNAME record pointing all subdomains to the same server. So if the user goes to domain.com it will point them to 285.24.435.75, and if they go to www.domain.com, blah.domain.com, or any other subdomain, they'll also get sent to 285.24.435.75. All the examples I read on the Internet about setting up A records have the A record set to a subdomain such as www. WWW is deprecated, so I want to have nothing to do with it.

    Currently my xname.org zone looks like this:

        $TTL 86400 ; Default TTL
        domain.com. IN SOA ns0.xname.org. nbvfour.gmail.com. (
            2010052503 ; serial
            10800      ; Refresh period
            3600       ; Retry interval
            604800     ; Expire time
            10800      ; Negative caching TTL
        )
        $ORIGIN domain.com.
          IN NS ns2.xname.org.
          IN NS ns0.xname.org.
          IN NS ns1.xname.org.
        @ IN A 65.49.73.148
        * IN CNAME domain.com

    The '@' symbol is something the GoDaddy domain interface uses to mean "this root domain", but that may have been specific to that interface and have no meaning here. Before, I had a 'www' entry in the A records and it worked, in the sense that I could ping 'www.domain.com' and it returned a response, but pinging the root domain 'domain.com' returned "no host found".
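
    A quick way to check how the zone actually resolves, assuming dig is available (domain and nameserver names are taken from the zone above): query the authoritative server directly so local caching doesn't hide changes. Note that an unqualified CNAME target like "domain.com" in a zone file is expanded relative to $ORIGIN, so whether a trailing dot is present is worth double-checking.

        # does the bare domain have an A record on the authoritative server?
        dig +short domain.com A @ns0.xname.org

        # what does the wildcard actually expand to?
        dig +short blah.domain.com CNAME @ns0.xname.org
        dig +short blah.domain.com A @ns0.xname.org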

    Read the article

  • Resolving "not found" messages after doing ./configure building node.js

    - by duke
    Hello, I am trying to install node.js on Debian AMD64. I got node.js from git. When I do ./configure, a bunch of the "Checking for ..." messages say "not found". I want to resolve all of these and ensure everything needed is present. Can anyone suggest what I need to do to resolve the "not found" messages? Thanks heaps.

        server:/devel/node# ./configure
        Checking for program g++ or c++          : /usr/bin/g++
        Checking for program cpp                 : /usr/bin/cpp
        Checking for program ar                  : /usr/bin/ar
        Checking for program ranlib              : /usr/bin/ranlib
        Checking for g++                         : ok
        Checking for program gcc or cc           : /usr/bin/gcc
        Checking for gcc                         : ok
        Checking for library dl                  : yes
        Checking for library execinfo            : not found
        Checking for openssl                     : not found
        Checking for function SSL_library_init   : yes
        Checking for header openssl/crypto.h     : yes
        Checking for library rt                  : yes
        --- libeio ---
        Checking for library pthread             : yes
        Checking for function pthread_create     : yes
        Checking for function pthread_atfork     : yes
        Checking for futimes(2)                  : yes
        Checking for readahead(2)                : yes
        Checking for fdatasync(2)                : yes
        Checking for pread(2) and pwrite(2)      : yes
        Checking for sendfile(2)                 : yes
        Checking for sync_file_range(2)          : yes
        --- libev ---
        Checking for header sys/inotify.h        : yes
        Checking for function inotify_init       : yes
        Checking for header sys/epoll.h          : yes
        Checking for function epoll_ctl          : yes
        Checking for header port.h               : not found
        Checking for header poll.h               : yes
        Checking for function poll               : yes
        Checking for header sys/event.h          : not found
        Checking for header sys/queue.h          : yes
        Checking for function kqueue             : not found
        Checking for header sys/select.h         : yes
        Checking for function select             : yes
        Checking for header sys/eventfd.h        : not found
        Checking for SYS_clock_gettime           : yes
        Checking for library rt                  : yes
        Checking for function clock_gettime      : yes
        Checking for function nanosleep          : yes
        Checking for function ceil               : yes
        Checking for fdatasync(2) with c++       : yes
        'configure' finished successfully (1.479s)
        server:/devel/node#
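
    For what it's worth, several of those "not found" lines (port.h, sys/event.h, kqueue, execinfo) refer to BSD/Solaris interfaces that are expected to be absent on Linux, and configure still finishes successfully. The check that usually matters on Debian is the OpenSSL one; a minimal sketch of the usual fix (package names assume a stock Debian/apt setup):

        # development headers the node.js build commonly wants on Debian
        sudo apt-get install build-essential libssl-dev pkg-config
        ./configure && make && sudo make install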

    Read the article

  • Is browser based wireless authentication secure?

    - by johnnyb10
    Our wireless network previously used a preshared WPA/WPA2 key for guest access, which allows them access to the Internet. (Our employee access uses 802.1x authentication). We just had a wireless consultant come in to fix various wireless issues we had; one of the things he wound up doing was changing our guest access to HTML-based instead of the preshared key. So now that guest SSID is open (instead of using WPA) and users are presented with a browser-based login screen before they can get on the Internet. My question is: Is this an acceptable method from a security standpoint? I would assume that having an open network is necessarily a bad idea, but the consultant said that the traffic is still using PEAP, so it's secure. I didn't get a chance to question him further on this because we ran late and a bunch of other things came up. Please let me know what you think about the advantages/disadvantages of using HTML-based wireless authentication as opposed to using a preshared WPA key. Thanks...

    Read the article

  • Hardware needed to route between two networks over wireless

    - by AptDweller
    I recently rented an apartment about 100 yards from my brother's house. I have line of sight to his house and can pick up his home AP signal with one of my two laptops if I go out on my balcony (facing his house) or put the laptop by the window. The other laptop will sometimes see the SSID broadcast, but fails to connect, drops, etc.

    We would like to set up a persistent wireless connection between our homes. We would prefer each network be logically segmented as an independent network, but he will share his Internet connection. I've got a bunch of TV shows saved to a NAS by my TiVo that I'd like to make available to him across the wireless link. My brother strongly prefers not to mess with his WAP at all; his network is running fine and he is afraid to mess it up. I guess you could say he is "technologically declined". If we can get a reliable 11 Mbps connection we will be satisfied.

    What hardware do I need to make this work? I was thinking a router with two wireless interfaces (external antennas), a wired interface, and a directional antenna mounted on my balcony facing his house. Can anyone recommend hardware to make this happen? Cheaper is better; I'll only be living in the area a year or two. I do have an old satellite TV antenna, if that can be used to direct the signal.

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which is cached via Memcached):

        100 concurrent calls to API call (http):  115ms (median)  260ms (max)
        100 concurrent calls to API call (https): 6.1s (median)   11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache processes. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it.

    There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168   64.1    162    268
        Processing:   385 6096 3438.6   6199  11928
        Waiting:      379 6091 3438.5   6194  11923
        Total:        449 6264 3476.4   6323  12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
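
    One way to separate raw TLS handshake cost from application time is openssl's s_time benchmark, a sketch of which follows (the hostname is a placeholder). If the -new numbers are poor while -reuse is fast, the handshakes themselves are the bottleneck and session reuse/keepalive is where to focus; if both are fast, the slowdown is further back in the stack.

        # full handshakes for 10 seconds
        openssl s_time -connect example.com:443 -new -time 10

        # resumed sessions for 10 seconds
        openssl s_time -connect example.com:443 -reuse -time 10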

    Read the article

  • how to back up data from a machine that keeps hanging

    - by Amit Phatarphekar
    Hello - I have a storage server running OpenSolaris, but lately it's been acting up: it hangs at random times due to some SCSI/ATA related error messages. I've tried to fix it without any progress, so I'm giving up now. The machine keeps hanging every 30 minutes or 1 hour, sometimes after 4 hours; it's very unpredictable. So I've decided to just reformat the storage server and start from scratch. Maybe I'll not use Solaris and install something else, since the errors are related to Solaris running on ATA HDDs or something.

    Question: before I reformat it, I want to back up some of the important data on it. It has a VM with 200 GB disk files, a whole bunch of ISOs stored on it, etc. I'm using a simple scp to copy the files over to a different machine. My issue is that, because the machine hangs, sometimes my file copy is incomplete and I have to start all over again. Let's say I'm trying to copy a 200 GB file, which takes about 4 hours: if the machine hangs before the whole file is copied over, I have to recopy the file from scratch.

    Is there a solution to copy the files over such that if the machine hangs or the network goes down, the copying can resume from where it left off? For example, if 50 GB of a 200 GB file was copied and the machine hung, next time it would just continue to copy the rest instead of starting all over again. Thanks, Amit
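
    A sketch of the usual approach, assuming rsync and ssh are available on both ends (host and paths are placeholders): rsync keeps partially transferred files and skips data that already arrived, so re-running the same command after each hang picks up roughly where it stopped.

        # -P implies --partial (keep interrupted transfers) and --progress
        rsync -avP --timeout=60 /tank/vms/ backupuser@otherbox:/backups/vms/

    Just re-run the identical command after every hang; completed files are skipped. For a single huge file, --inplace or --append-verify (on newer rsync versions) can also avoid rewriting the part that already transferred.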

    Read the article

  • Windows 7, network shares, and authentication via local group instead of local user

    - by Donovan
    I have been doing some troubleshooting of my home network lately and have come to an odd conclusion that I was hoping to get some clarification on. I'm used to managing share permissions in a domain environment via groups instead of individual user accounts. I have a box at home running Windows 7 Ultimate and I decided to share some directories on that machine. I set it up to disallow guest access and require specifically granted permissions (password mode?). Anyway, after a whole bunch of time I figured out that even though the shares I created were allowed via a local group, I could not access them until I gave specific allowance to the intended user. I just didn't think I would have to do that.

    So here is the breakdown. The network is a Windows workgroup, not HomeGroup or an NT domain.

        PC_1 - Win 7 Ultimate - sharing in classic mode - user BOB - groups: Administrators
        PC_2 - Win 7 Starter  - client - user BOB - groups: Administrators
        PC_3 - Win XP Pro     - client - user BOB - groups: Administrators

    The share on PC_1 granted permission only to the local group Administrators. Local user BOB on PC_1 was a member of Administrators. Both PC_2 and PC_3 could not browse the intended share on PC_1 because they were denied access. Also, no challenge was presented; they were simply denied. After adding BOB specifically to the intended share, everything works just fine.

    Remember, it's not an NT domain, just a workgroup. But still, shouldn't I be able to manage share permissions via groups instead of individual user accounts? D.

    Read the article

  • How to remove static IP from Mitel 5312 and enable DHCP

    - by jimbo
    I'm not sure this is the right forum for this question -- although I'm confident I'll be told if not! -- but I've read the fine manual (at least, such a manual as I have), I've googled and I cannot get any insight into where to even start solving this problem.

    I have a bunch of Mitel 5312 handsets, talking to a 3300 ICP controller. Some handsets are at a remote location, get an address from my DHCP server over there, and use the Mitel "Teleworker" extension to connect in over the Internet. The remaining handsets were set up with static IPs by a BT-supplied engineer, on the same subnet as the ICP itself. So far, so good.

    I have one remaining teleworker licence, and need to move a handset from the home location to the remote. I've managed to boot it and configure teleworker, but I cannot for the life of me see where I tell it to forget its static IP, and make a DHCP request. Any ideas? Should I be looking on the controller, or holding magic combinations of buttons on the handset itself?

    EDIT: Following some advice from Robert, below, I've broken out a spare device and reassigned the profile for this user's extension to the MAC of the new phone, and a new profile to the old MAC. Unfortunately this still doesn't get me anywhere -- the new handset now asks for the teleworker install password. I suspect I'm going to have to get a Mitel engineer involved here, since I've never been given that password... Unless anyone has any great ideas?

    Read the article

  • openSSL tutorial not fully working - Can sign but cannot restore original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates involved in an OpenSSL homework. We have a bunch of PDF files; I'm the CA, and each one should send me a signed PDF for me to verify. I've told them to do the following (and tried to do it myself):

        1. Request and obtain a certificate (I'll skip this part)
        2. Create a MIME message with the PDF file in it:
           makemime -c "text/pdf" -a "Content-Disposition: attachment; filename=Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg
        3. Sign with OpenSSL:
           openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt
        4. Verify:
           openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt
        5. Extract the attachment with munpack Elaborato-verified.msg
        6. View with Acrobat Reader

    The problem is that even though I get a file that (from its binary content) resembles a PDF, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
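
    For comparison, a sketch of signing the PDF directly with openssl smime and letting it build the MIME structure itself (the same key/cert names as above are assumed). The -binary flag skips the text-mode CR/LF canonicalisation that can shrink or corrupt binary attachments, which looks a lot like the symptom described:

        # sign the PDF as opaque binary content, embedding it in the signature
        openssl smime -sign -binary -nodetach -in Elaborato.pdf -out Elaborato.pdf.p7m \
            -signer nomegruppo.crt -inkey nomegruppo.key -certfile ca.pem

        # verification writes the original bytes back out
        openssl smime -verify -in Elaborato.pdf.p7m -CAfile ca.pem -out Elaborato-restored.pdf

    If this round-trips cleanly while the makemime route doesn't, the encoding of the attachment in step 2 is the likely culprit.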

    Read the article

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by blade
    Hi, I have deployed a bunch of Windows Server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (along with the DC). If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server: I provide the hostname of the desired target server and this fails, but if I provide the IP, it works.

    How can I pass the hostname and have it resolve to an IP? Is there anything I need to look for on the DNS server? It has records of the hostnames (in forward lookup, I think), but reverse is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP? Also, I don't know the subnet mask because this is not in my control, so the machines may not be in the same subnet -- can this be a cause of the problem? Where is the problem? Thanks
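
    A quick diagnostic sketch from one of the failing VMs (the hostname and the DC's address below are placeholders): querying the DC's DNS directly shows whether the record exists and whether the VM is even using that server.

        # which DNS server is this VM actually using? (shown at the top of the output)
        nslookup targetserver

        # ask the domain controller's DNS service directly
        nslookup targetserver 10.0.0.5

    If the second query succeeds and the first doesn't, the VM's network adapter isn't pointed at the DC for DNS; if both fail, the forward lookup zone is missing the record.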

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    Hi, we have an Ubuntu 10.04 server with a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba.

    The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission -- however, they can't, as Samba won't allow it.

    Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them?

    Edit: The share is defined in smb.conf as:

        [global]
        log file = /var/log/samba/log.%m
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        obey pam restrictions = yes
        map to guest = bad user
        encrypt passwords = true
        passwd program = /usr/bin/passwd %u
        passdb backend = tdbsam
        dns proxy = no
        netbios name = ubsrv
        server string = ubsrv
        unix password sync = yes
        os level = 20
        syslog = 0
        usershare allow guests = yes
        panic action = /usr/share/samba/panic-action %d
        max log size = 1000
        pam password change = yes
        workgroup = workgroup

        [Projects]
        valid users = @Staff
        writeable = yes
        user = @Staff
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes

    The folder itself looks like this:

        ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects

    Thanks in advance, Matt
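
    One setting worth testing here -- a suggestion on my part, not something from the post -- is Samba's "dos filemode" share option, which is documented to let any user with write access to a file change its permission bits (and hence clear the DOS Read Only attribute), instead of restricting that to the file's owner:

        [Projects]
        ; ... existing options unchanged ...
        store dos attributes = Yes
        dos filemode = Yes

    After editing smb.conf, reload the running daemons (e.g. with "smbcontrol smbd reload-config") and retest from a Windows client as UserB.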

    Read the article

  • Unexplained cache RAM drops on Linux machine

    - by FunkyChicken
    I run a CentOS 5.7 64-bit machine with 24 GB of RAM, running kernel 2.6.18-274.12.1.el5. This machine runs only Nginx, php-fpm and Xcache as extra applications. Since about 3 weeks ago the memory behavior on this machine has changed and I cannot explain why. There are no crons running which flush anything like this, and there are no large numbers of files being deleted/changed during these drops.

    The 'cached' memory gets dropped about every few hours, but it's never a set gap between flushes, which indicates to me that some bottleneck gets reached instead. It also always seems to happen when total memory usage gets to about 18 GB, but again, not always exactly 18 GB. This is a graph of my memory usage:

    As you can see in the graph, the 'buffers' always stay more or less the same; it is mainly the 'cache' that gets dropped. Running vmstat -m, I have output the memory usage just before and just after a drop: http://pastebin.com/diff.php?i=hJqZqztm ('old version' being before, 'new version' being after a drop).

    About 3 weeks ago my server crashed during a heavy DDoS attack; after I rebooted the machine this odd behavior started. I have checked a bunch of logs, restarted the machine again, and cannot find any indication of what changed. During these 'cache' memory drops, my inode usage drops at the same time. Does anyone have any idea what might be causing this behavior? Clearly my RAM isn't full, so I am curious why this could be happening.
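
    A small logging sketch (standard procfs files, so it should apply to this kernel) that records what actually shrinks when a drop happens -- page cache, slab, or the dentry/inode caches -- which narrows down whether something is forcing reclaim:

        # append cache/slab/dentry numbers to a log once a minute
        while true; do
            { date; grep -E '^(MemFree|Buffers|Cached|Slab)' /proc/meminfo; \
              cat /proc/sys/fs/dentry-state; } >> /var/log/meminfo-watch.log
            sleep 60
        done

    It is also worth ruling out anything writing to /proc/sys/vm/drop_caches (the manual cache-flush interface), e.g. by grepping for drop_caches in cron jobs and deploy scripts.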

    Read the article

  • Provisioning DELL servers using only linux tools

    - by Krist van Besien
    We have taken delivery of a bunch of Dell PowerEdge rack servers. Unfortunately, when ordering, it was neglected to ask for DHCP to be enabled on the built-in iDRAC controllers, so they are all stuck on the same IP address. This means I'll have to go to each of them individually and configure a new IP at the console. In the future I want to avoid that.

    Dell proposes to deliver the next batch with auto-discovery enabled. As I understand it, this means that when the machine wakes up for the first time, the iDRAC will request a DHCP address. The DHCP server then supposedly also provides a "provisioning" server, which supplies a username and password and a configuration to be applied. This would allow us, for example, to configure things like RAID automatically.

    However, I can't seem to find a way to set up such a provisioning server that does not involve setting up a Windows machine, and I want to use Linux tools exclusively. Is there a way to do this? I want to just rack servers, switch them on, and then do everything remotely -- using only Linux tools.
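
    For the boxes that are already stuck on one address, a sketch of flipping the iDRAC to DHCP without visiting the console -- assuming IPMI-over-LAN is enabled and the factory credentials still apply (root/calvin is Dell's usual default); the address is a placeholder, and with duplicated IPs this is done one powered-on server at a time:

        # over the network, while a single server holds the duplicated IP
        ipmitool -I lanplus -H 192.168.0.120 -U root -P calvin lan set 1 ipsrc dhcp
        ipmitool -I lanplus -H 192.168.0.120 -U root -P calvin lan print 1

        # or locally from the installed OS, via the host-side IPMI interface
        ipmitool lan set 1 ipsrc dhcp

    This doesn't replace a provisioning server, but it removes the need to touch each console just to fix the addressing.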

    Read the article

  • master-slave datastore replication, automatic failover, and wackamole

    - by z8000
    I have 2 dedicated servers provisioned for my next project's datastores. The datastores are configured for master-slave replication. There's no inherent automatic failover, but I of course want this. That is, I'd love for access to the master datastore to always just work, without having to configure a client library to detect when a master is down and fail over to the slave.

    I've seen Wackamole, which is based on the Spread Toolkit. You provide Wackamole with a set of IPs and a bunch of nodes, and regardless of the up/down state of any of the nodes, those IPs will stay available/up. Wackamole detects when a node goes down and ARPs the IP(s) that were up on the now-down node. It's pretty neat, actually. So my thought was to use Wackamole to keep the 2 virtual private IPs available/up. Clients would then always use the same private IP to access the master datastore, and the same but distinct IP for the slave datastore, even if those IPs were hosted on the same node. My datastore servers are accessed over a private network; I am unsure if this messes with Wackamole, though.

    Is this lunacy? How do you generally handle automatic failover of private services like a datastore? FWIW, it shouldn't matter, but the datastore is Redis. I don't want to hear "use mySQL" please :) Thanks.

    Read the article

  • Possible to host CentOS netinstall files on a local HTTP/FTP?

    - by garlicman
    I'm running XenServer on an Dell R610 and am running into a catch-22. During install from DVD, CentOS can't find the DVD package catalogue. It's a reported error for some, XenServer + CentOS6 + DVD install in some hardware configurations = failed install. Yes, I checked the MD5 and let the disc test pass. In every reported case, the netinstall was the solution. The issue is my net access is required to go through a web proxy that prompts before you can download a file. This naturally breaks any download automation. I've been waiting on our IT to put in an exception rule to allow my lab to bypass the prompt, but it's been over 3 weeks now and they don't seem responsive. (I've been working on this a day or two a week) I want to try and host the netinstall files local in my Xen network. Right now I only have a bunch of Windows based VMs, CentOS won't install so I don't have any Linux tools. I had tried simply hosting all the DVD contents off one of the Windows servers using Mongoose. (I didn't want to setup IIS) I copied them to a hosted sub-directory similar to all the mirrors out there (e.g. http:///centos/6.2/os/i386/) with no auth or anything. Then in the netinstall I correctly pointed to it. I now realize just copying the DVD files over won't work. The repodata will point to a local device, not the site I'm hosting. (e.g. the DVD repodata includes xml that points to where the packages are) Clearly I'm hosting them over HTTP, not from a DVD. Is there an easy way to sort this out? I'm just trying to install CentOS6 on Xen. If there's a turnkey downloadable Xen image with CentOS 6.2 on it, or a downloadable repo image, I'll take that too! Thank you in advance!

    Read the article
