Search Results

Search found 2736 results on 110 pages for 'semantic meaning'.


  • GlassFish v3: Security related updates?

    - by chris_l
    I've used GlassFish v3.0 as my main development application server for a few weeks now. Now that I want to install it on my VPS, I'd like to get the latest security updates, because GlassFish v3 Release 3.0 (Open Source Edition or not) is already a few months old, and v3.1 is only available as "early access" nightlies (see https://glassfish.dev.java.net/public/downloadsindex.html). GlassFish offers an update mechanism (via pkg or updateTool), but when I simply try to get the latest updates (pkg image-update), it finds nothing. However, when I change the preferred publisher to dev.glassfish.org, I get a list with lots of updates. The interesting thing is that I haven't been able to find any description of the exact meaning of the various publishers/repositories (release, stable, contrib and dev) anywhere on the web, most importantly one answering this question: am I supposed to use the "dev" repository for security updates, or is it (probably more likely) for unstable updates? Where do I get security updates from, then? Or are there simply no security updates yet? Asking on the GlassFish forum resulted in 56 views, but 0 answers.
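
    In case it's useful, this is roughly how publishers can be inspected and switched with the pkg tool that ships with GlassFish v3 (a minimal sketch from my understanding of the IPS toolchain; treat the -P "preferred" flag and the dry-run options as assumptions to verify against pkg's own help):

        # list the publishers the image currently knows about
        pkg publisher
        # make dev.glassfish.org the preferred publisher (assumption: -P sets "preferred")
        pkg set-publisher -P dev.glassfish.org
        # refresh the catalogs, then do a verbose dry run of the update
        pkg refresh
        pkg image-update -nv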


  • Repair partition table

    - by m.sr
    Hello. I've just overwritten the partition table of my system's hard disk: I ran cfdisk on the wrong device (/dev/sda instead of /dev/sdd), deleted all partitions, created one new primary partition spanning the whole device, set its type to 07 (NTFS) and hit write. So here I am with my system still running. Until I reboot, I hope/guess nothing will change, meaning all my data is still accessible (I'm currently making a dd backup of the whole device and plan to make a .tar.gz backup of the most important data later). I also backed up /proc/partitions, /proc/diskstats (even though I guess that one is more about throughput and the like) and /sys/block/sda/sda?/{start,size}. Some further things I know:
    - 4 primary partitions
    - 1st partition: ~100 MB, ext3, /boot
    - 2nd partition: ~100 MB, "Win7 boot partition", ntfs(?)
    - 3rd partition: ~20-30 GB, Win7, ntfs
    - 4th partition: ~20-30 GB, LUKS-encrypted device
    - the LUKS-decrypted device is an LVM PV
    - the /, /home and swap partitions are all LVs on the (VG on the) above-mentioned PV
    So my questions:
    - What is the simplest way to just write the kernel's partition table back to the disk?
    - What is the simplest way to take the data mentioned above (and perhaps other data I don't know of) and regenerate the partition table from it?
    - Are there any problems to take care of regarding LUKS and/or LVM?
    - Is there any data I should back up before rebooting (meaning stuff from the kernel [/sys/..., /proc/...] and so on that could help me regenerate the partition table)?
    Thanks a lot! P.S.: Debian sid, kernel 2.6.34-1-amd64 from debian-experimental, 80 GB Intel SSD
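
    Since the kernel still holds the old table in memory (that is exactly what /sys/block/sda/sda?/{start,size} reflects), one approach is to rebuild an sfdisk script from those values before rebooting. A sketch, not a tested recipe - the input format depends on the sfdisk version, and the partition types must be added by hand:

        # dump the kernel's in-memory view as start,size pairs (512-byte sectors)
        for p in 1 2 3 4; do
            printf '%s,%s\n' "$(cat /sys/block/sda/sda$p/start)" \
                             "$(cat /sys/block/sda/sda$p/size)"
        done > table.txt
        # review table.txt, append each partition's type (e.g. ,83 for Linux),
        # then - only after double-checking everything - write it back in sector units:
        # sfdisk -uS /dev/sda < table.txt

    LUKS and LVM live inside the partitions and don't store the partition table themselves, so as long as the boundaries come out identical they should be unaffected.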


  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS / apps & data / bulk data), all of which should be mirrored. Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or whether Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID1 are rather different from RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1. (For example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in flight, as long as there's no data corruption.) Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is the better choice, though. Remember, I'm only interested in mirroring - not the more complicated striping or parity operations that generally show off the poor performance of crappy motherboard RAID solutions.
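
    For reference, this is roughly how the mirror would be added from diskpart once Windows 7 is installed on the first disk (a sketch from memory - mirroring happens post-install rather than during setup, and the disk numbers are assumptions to check with "list disk" first):

        diskpart
        list disk
        select disk 0
        convert dynamic        rem software mirroring requires dynamic disks
        select disk 1
        convert dynamic
        select volume c
        add disk=1             rem attach a mirror of the selected volume on disk 1

    The same "add disk=1" step would be repeated with each of the three volumes selected.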


  • Using AT on Ubuntu to Background Downloads (w/ Queue)

    - by Nicholas Yost
    I am writing a PHP script, but I want to use the at command in Ubuntu to fetch a remote file via wget. I'm basically looking to background the process so PHP can finish fairly quickly. I cannot find any questions on here about how to use both, but I basically want to do the following pseudo-code:

        <?php
        exec('at now -q queuename wget http://path.to/remote/file.ext');
        ?>

    Additionally, I'd like to queue this per provider. I'd like each path.to to have its own queue, so I only download one file from each provider at a time. Meaning:

        <?php
        exec('at now -q remote wget http://path.to/remote/file.ext /local/path');
        exec('at now -q vendorone wget http://vendor.one/remote/file.ext /local/path');
        exec('at now -q vendortwo wget http://vendor.two/remote/file.ext /local/path');
        exec('at now -q vendorone wget http://vendor.one/remote/file.ext /local/path');
        ?>

    This should download the files from path.to, vendor.one and vendor.two immediately, and when the first file has finished downloading from vendor.one, it starts the second file. Does that make sense? I can't find anything like this anywhere on the web, much less on SO/SF. If we can use the crontab to run a one-off wget command, that's fine too.
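
    Two details would likely trip up that pseudo-code as written: at(1) reads the command from stdin rather than taking it as arguments, and its queues are named by a single letter (a-z or A-Z), not an arbitrary string. A corrected sketch (the queue letters and wget's -P option are my guesses at the intent; note I'm not certain at actually serializes jobs within a queue, so a tool like task spooler may be safer for strict one-at-a-time downloads per provider):

        # one single-letter queue per provider; at takes the command on stdin
        echo "wget -P /local/path http://path.to/remote/file.ext"    | at -q a now
        echo "wget -P /local/path http://vendor.one/remote/file.ext" | at -q b now
        echo "wget -P /local/path http://vendor.two/remote/file.ext" | at -q c now

    From PHP, each of those lines would be wrapped in exec() the same way as above.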


  • Can I have damaged a CPU?

    - by Pascal
    Hey guys, this is a question I asked on the Anandtech forums, but I just discovered this community and think the question would fit right in. Here goes: I built a PC around a Q9550 a year and a half ago (the mobo is an Asus P5Q Deluxe). The specs say 2.83 GHz; I overclocked it to 3.40 GHz without any problem (or so I thought) until 2 months ago. Cooling is provided by the stock Intel fan. Two months ago I began to see random crashes, with the BIOS reporting a CPU overheat error. The PC would still reboot at the overclocked speed without any problem. Since last Saturday and a few more crashes, the PC won't boot at 3.40 GHz anymore, and even at stock speed (2.83 GHz) I get core temperatures of 60 C idle and 95 C under load on the first two cores. (I am talking about the 4 core temperatures here, not the T-CPU, which obviously is lower.) The fan is running at a steady 2000 RPM. Questions for you:
    1. Is 2000 RPM the normal speed of the Intel fan, or is my fan somehow broken (which could explain the overheating)? In that case, any recommendation for a good fan for OCing?
    2. The hypothesis I fear is the right one: can the CPU have been slowly damaged over time by this OCing, meaning there is nothing much to do except wait for it to die?
    (As a side note, I am surprised that the Q9550 is still around 300 $CDN here... I thought it would have become cheaper with all those i3/i5/i7 around.) Any help or advice would be more than welcome...


  • Warning for "Reallocated Event Count" S.M.A.R.T. attribute with a new/unused drive. How serious is this?

    - by Developer Art
    I've just looked at the health status of my old 2.5-inch 500 GB Fujitsu drive with the popular "HD Tune" utility. It shows a warning for the "Reallocated Event Count" attribute. How serious is that? The thing is, the drive is practically new. I pulled it out of a new laptop over a year ago and have never used it since. Right now it only has 53 "power on" hours, which sounds about right, since I only had it running a few evenings overnight before switching it for something more performant. Does this warning indicate that the drive is likely to fail some time in the future? I'm somewhat perplexed, since the drive is effectively unused. What is more, I have arranged with somebody to buy this drive off me, since I don't really need it. It is 12.5 mm thick (with 3 platters), meaning it doesn't fit into an external enclosure, which makes it quite useless to me. Can I give away the drive without having it on my conscience, or should I rather cancel the deal? In other words, can the drive be used safely for years to come, or should I better throw it away? I'm running a sector test now to see if there are any real problems. I will post the results as soon as they're available.
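
    For a second opinion beyond HD Tune's pass/warn flag, the raw SMART values are worth reading directly, e.g. with smartctl from smartmontools (a sketch; /dev/sdX is a placeholder, and this assumes a Linux live system or the Windows build of smartmontools):

        # attribute 196 is Reallocated Event Count, 5 is Reallocated Sector Count;
        # 197/198 (pending/uncorrectable sectors) are the ones that most suggest real trouble
        smartctl -A /dev/sdX | egrep -i 'realloc|pending|uncorrect'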


  • Apache2 slow serving static files while healthy

    - by user45339
    My Apache status looks like:

        201 requests/sec - 98.8 kB/second - 504 B/request
        85 requests currently being processed, 345 idle workers
        _____CCW_C_____C__C__C_R____C_WC_________C__C____CW__C__CCC_____
        __C____W______C___C___CW__C_C______C__W_C__C_____CCC____C______R
        CC_C_______C___C____C______________C______C__C________________C_
        ___________________C______________________C_______C___C_____C___
        CC____C__C___R_____C_C_CC__________C___C___________R____C_C_C___
        ______C______W_W__W___C____________________C__WCC__R__R_C_______
        R__RC________________________C___R____W__C____..................
        ....................................................

    The server load averages 2 on a 4-core machine. IO utilization is 10-15% and doesn't have many jumps over 70%. The machine has almost 4 GB free and uses no swap. The site on the machine is a PHP site. All the PHP code is optimized and mostly fast when it gets accessed; however, sometimes requests get stuck. Stuck meaning: no response for at least 10 sec. We debugged the PHP code, but it is quite optimal and fast. We spent a lot of time on it, until we decided to test requesting a static test.html page containing just:

        <html><body>test</body></html>

    This static resource also gets 'stuck' in the same manner the PHP pages get 'stuck'. How is this possible given the health of the system? I tested the network, but when the monitoring shows 'slowness' in the site, the HTML test file also takes far longer than 10 sec to load, measured using:

        time lynx -dump http://127.0.0.1/test.html

    We are kind of desperate to solve this problem, but we cannot seem to tackle it.
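
    To catch a stall in the act, one cheap probe (a sketch; the URL and the 10-second threshold are placeholders) is to time the static file every second and log only the slow samples, then line those timestamps up with server-status snapshots taken at the same moments:

        while sleep 1; do
            t=$(curl -s -o /dev/null -w '%{time_total}' http://127.0.0.1/test.html)
            echo "$(date +%T) $t"
        done | awk '$2 > 10'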


  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. The formula for the probability of success of a rebuild, ignoring mechanical problems, is simple:

        error_probability = 1 - (1 - per_bit_error_rate)^bits_read

    With 3 TB drives I get:
    - 38% probability of experiencing an URE (unrecoverable read error) for a 2+1-disk RAID5 (4.7% for enterprise drives)
    - 21% for a RAID1 (2.4% for enterprise drives)
    - 51% probability of error during recovery for the 3+1 RAID5 often used by users of SOHO products like Synology's. Most people don't know about this.
    Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant of multiple disk failures (RAID6/Z2, RAIDZ3 and RAID1 with multiple disks). If only the first disk is used for the rebuild and the second one is read again from the beginning in case of an URE, then the error probability is the one calculated above, squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are needed: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
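
    To make the first figure concrete, here is the arithmetic for the consumer 2+1 RAID5 case (assuming the usual consumer spec of one URE per 10^14 bits and a rebuild that reads both surviving 3 TB disks in full):

        # 2 disks x 3e12 bytes x 8 bits/byte, read at an URE rate of 1e-14 per bit
        python3 -c 'print(1 - (1 - 1e-14)**(2 * 3e12 * 8))'
        # -> ~0.3812, i.e. the ~38% quoted above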


  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is:

        You can think of a DB Instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance.

    The last part of that quote, "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas", I interpret to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself). This was asked in the Stack Overflow question "Does Amazon RDS support multiple databases per instance?"; the answer there quoted an older version of the FAQ. What I am looking for is an Amazon document that clarifies this question, or else someone with Amazon RDS experience who can attest to what the situation actually is.


  • IIS6: How to troubleshoot a 404 error in an ASP.NET application?

    - by Tomalak
    I have an ASP.NET application on a Windows Server 2003/IIS6 machine that refuses to run for some reason (it's the Xerox CentreWare Web, if that info helps). It has been working flawlessly on this server before. Now all I get if I try to open the app homepage (http://some.intranet.server/XeroxCentreWareWeb/) is a "404 - File or directory not found" error.
    - The app is configured to run in its own app pool, which runs as Network Service.
    - The Network Service account has read access to the configured directory.
    - If I stop the app pool, I get the expected "Service Unavailable" message, meaning the app and its pool are wired correctly.
    - I tried to track down any file permission issues with procmon - nothing to be seen. There isn't even an access to the web app directory happening when the page loads.
    Interestingly, according to procmon, the web server accesses the 401-2 custom error file (Logon failed due to server configuration) first, but then decides to send the 404 down to the client. EDIT: The app runs with Windows-integrated authentication. Regular users have access to the app directory as well (I would have noticed file system "ACCESS DENIED" messages in procmon if there had been any). This makes me think that there is some kind of weird permission problem that occurs even before the application files are being accessed. I just have no idea where to look. I've tried running the app pool as Local System for a test, but to no avail. What else could I check in this case?
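
    One classic cause of a plain 404 for an entire ASP.NET app on IIS6 is the ASP.NET Web Service Extension being set to Prohibited - IIS6 answers requests for prohibited extensions with a 404 rather than an error that names the real problem. Worth ruling out (a sketch; the exact version string depends on which ASP.NET runtime the app targets, so treat it as an assumption):

        rem list web service extensions and their status
        cscript %SystemRoot%\system32\iisext.vbs /listext
        rem enable ASP.NET if it shows as prohibited
        cscript %SystemRoot%\system32\iisext.vbs /enext "ASP.NET v2.0.50727"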


  • Apache and linux file permissions

    - by morpheous
    I recently moved a Symfony 1.3.2 website (a PHP web framework) from a Windows machine to Linux (Ubuntu 9.10). Ever since then, I have had all kinds of problems involving file permissions (even though the app ran without any of these problems on Windows). I ran symfony fix-perms, which applied a 777 mask to the web directory (presumably including its subfolders) - as an aside, I think that is a potential security hole, and I have been meaning to come in here to ask how to set permissions correctly. Currently, when attempting to save a file from my website, I am getting the following error:

        PHP Warning: imagejpeg() [function.imagejpeg]: Unable to open '/home/morpheous/work/webdev/frameworks/symfony/sites/project1/web/uploads/../images/thumbnail/959cd604cf6115014a3703bef5a50486a5520642.jpg' for writing: Permission denied in /home/morpheous/work/webdev/frameworks/symfony/sites/project1/apps/frontend/lib

    Here are the permissions on the folders:

        web
        drwxr-xr-x 16 morpheous morpheous  4096 2010-02-24 21:01 web
        web/uploads/../images
        drwxr-xr-x 13 morpheous morpheous 12288 2010-04-09 15:25 images
        web/uploads/../images/thumbnail
        drwxr-xr-x  3 morpheous morpheous  4096 2010-02-24 20:44 thumbnail

    Can someone kindly tell me how to set the permissions so that my website (presumably running as the Apache daemon) can write the files to the directory required above?
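
    Rather than a 777 mask, a common arrangement is to give Apache's group write access only where uploads actually land (a sketch assuming Ubuntu's Apache runs as www-data and the paths shown above):

        # let the web server's group write to the upload targets only
        sudo chgrp -R www-data web/uploads web/images
        sudo chmod -R g+w web/uploads web/images
        # optional: new files created inside inherit the group
        sudo find web/uploads web/images -type d -exec chmod g+s {} +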


  • Mac OS X Lion (10.7) Drive Encryption

    - by Skoota
    My iMac has two drives (a 256 GB solid-state drive and a regular 2 TB hard drive). The Mac OS X Lion system is installed on the solid-state drive and, like many other users, I have moved my user profile folder onto the secondary 2 TB drive. However, as you may be aware, FileVault 2 on Mac OS X Lion (10.7) only encrypts the system drive. This leaves my data drive (containing my user profile folder, with all of my data) unencrypted. I am aware that workarounds for this issue exist (such as https://github.com/jridgewell/Unlock), but I am not happy with the results, since they involve decrypting the data drive on startup using a LaunchDaemon (before any user has logged into the computer), essentially meaning that any user who logs onto the computer will see the decrypted drive. I would like a method that only decrypts the data when an authorised user logs into the computer. As such, is there a way to do one of the following?
    1. Encrypt the entire data drive and only decrypt it when an authorised user logs into the computer. This would be equivalent to the Lion FileVault 2 behaviour, but on a secondary drive rather than the system drive.
    2. Encrypt only the user profile folder on the data drive, and only decrypt the folder when the user logs into the computer. This would be equivalent to the behaviour of FileVault 1 on previous versions of Mac OS X.
    I am happy to pay for a commercial third-party product that provides the required feature(s), but I have not yet been able to find one. Thanks in advance for any assistance.
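
    One possible stand-in for option 2 is an encrypted sparse bundle that holds the sensitive data and gets unlocked per user via the login keychain (a sketch; the size, volume name and filesystem are placeholders, and the hdiutil flags are from memory, so verify them against the man page):

        # create an AES-256 encrypted, growable disk image
        hdiutil create -encryption AES-256 -type SPARSEBUNDLE -size 500g \
            -fs HFS+J -volname SecureData ~/SecureData.sparsebundle
        # attach it; the passphrase prompt can be remembered in the user's keychain
        hdiutil attach ~/SecureData.sparsebundle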


  • Is it possible to trace the delegation path for a DNS lookup?

    - by Josh Glover
    I'm trying to determine why a Nagios host check is failing (hostnames and IPs have been changed to protect the guilty):

        : jmglov@laurana; host www.foo.com
        ;; connection timed out; no servers could be reached
        : jmglov@laurana; for ns in `grep -o '\([0-9]\+[.]\)\{3\}[0-9]\+$' /etc/resolv.conf`; do ping -qc 1 $ns; done
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 10.911/10.911/10.911/0.000 ms
        PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
        --- 192.168.1.2 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms

    So I know that my nameservers are reachable, meaning that some nameserver along the delegation path to the authoritative nameserver for my host is not responding. Is there an easy way to determine which nameserver this is (basically a traceroute for DNS)?
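
    dig can do more or less exactly this: +trace walks the delegation from the root servers downward, querying each level directly, so the hop that times out is the broken one (www.foo.com stands in for the real name):

        dig +trace www.foo.com
        # and to test recursion through one of the local resolvers directly:
        dig @192.168.1.1 www.foo.com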


  • Getting old bluetooth headset out of standby on Windows 7

    - by luagh45
    I think this is the problem, and if not, I'm open to suggestions. I have an old Jabra BT200 that I used to use with my phone. When a phone call was coming in, it would beep using its own noises (meaning the phone never rang inside the headset), and then I could push the 'answer/hang up' button, and sound and mic would start working. I have now paired it with Windows 7, and it looks good. Under the playback menu I have 'Bluetooth Hands free Audio / Jabra BT200 (Mono Audio) / Ready', and under the recording menu I have 'Bluetooth Audio Input Device / Jabra BT200 (Mono Audio) / Ready'. However, when I try to test the speakers, Windows sends a sound but I never hear it, and when I talk into the mic, Windows never hears me. If I right-click either the Bluetooth mic or speakers, there's an option to 'Connect', but it's grayed out and I cannot click it. The final piece of knowledge I have: my headset blinks once every 3 seconds when it's in standby, and I can't get that to change. If everything were working, it would blink once every second, at which point I think all of my problems would be fixed. Hence my issue: I can't seem to get my headset out of standby. I've tried sending the headset test noises and then pressing the 'answer' button, but still nothing. The headset beeps when I press the button, so it works; it just never comes out of standby. Is there maybe some way to trick my headset into thinking it's getting a phone call from my computer?


  • Keep Windows Installer from using largest drive for temporary files

    - by stefan.at.wpf
    By default, Windows Installer uses the largest drive for temporary storage, no matter whether that's needed (i.e. even when there would also be enough space on the system drive). Taken from http://msdn.microsoft.com/en-us/library/aa371372%28VS.85%29.aspx:

        During an administrative installation the installer sets ROOTDRIVE to the first connected network drive it finds that can be written to. If it is not an administrative installation, or if the installer can find no network drives, the installer sets ROOTDRIVE to the local drive that can be written to having the most free space.

    Now my system drive is an SSD, and my largest drive is a RAID that spins down when it's not used. Remember the SSD as system drive? Everything is silent now! Until I install something and Windows Installer wakes up my RAID again, just to put a small .tmp file on it... How can I prevent Windows Installer from using the largest drive as temporary storage? Can I maybe set some access rights to disallow Windows Installer from writing to my RAID drive? Any other ideas? Thank you!
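
    Since ROOTDRIVE is an ordinary public property, it can at least be overridden per install on the command line (a sketch; the package name is a placeholder, and setup.exe wrappers don't always pass properties through to msiexec):

        msiexec /i package.msi ROOTDRIVE=C:\

    I'm not aware of a machine-wide default, which is why this would have to be done per package.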


  • How do you add an A record for a root domain?

    - by nbv4
    This seems really simple, but I can't figure it out. I'm using xname.org since it's free and I own a bunch of domains spread over a few different registrars. The setup I desire is very simple: one A record that points the bare domain name to my server, plus a wildcard CNAME record pointing all subdomains to the same server. So if the user goes to domain.com it will point them to 285.24.435.75, and if they go to www.domain.com, blah.domain.com, or any other subdomain, they'll get sent to 285.24.435.75. All the examples I read on the internet about setting up A records have the A record set to a subdomain such as www. WWW is deprecated, so I want to have nothing to do with it. Currently my xname.org zone looks like this:

        $TTL 86400 ; Default TTL
        domain.com. IN SOA ns0.xname.org. nbvfour.gmail.com. (
            2010052503 ; serial
            10800      ; Refresh period
            3600       ; Retry interval
            604800     ; Expire time
            10800      ; Negative caching TTL
        )
        $ORIGIN domain.com.
            IN NS ns2.xname.org.
            IN NS ns0.xname.org.
            IN NS ns1.xname.org.
        @   IN A 65.49.73.148
        *   IN CNAME domain.com

    The '@' symbol is something the GoDaddy domain interface uses to mean "this root domain", but that may have been specific to that interface and have no meaning here. Before, I had a 'www' entry in the A records, and it worked in the sense that I could ping 'www.domain.com' and get a response, but pinging the root domain 'domain.com' returned "no host found".
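
    For what it's worth, '@' is standard zone-file syntax for the current $ORIGIN, not a GoDaddy invention, so the A record above is the right shape. The detail worth double-checking is the wildcard's target: inside a zone file, a name without a trailing dot gets $ORIGIN appended, so "domain.com" expands to domain.com.domain.com. A sketch of the two records as I'd expect them:

        @   IN  A      65.49.73.148
        *   IN  CNAME  domain.com.   ; trailing dot = fully qualified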


  • Per connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s. Meaning, while one TCP connection can pull at most 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server. I used a simple Java application to test this, transferring data from server to workstation using a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s, and the per-connection limit was the same for one connection as it was for 60 parallel connections. The problem is that I have no idea where this limit comes from. I have very little experience administrating Windows Server, so I have mostly been trying to find something by googling. I have checked the following:
    - Local Computer Policy > QoS Packet Scheduler > Limit reservable bandwidth: it's Not Configured;
    - Group Policy Management Console: we have two GPOs, but neither has any policy-based QoS defined;
    - there isn't any bandwidth-limiter program installed, as far as I can tell, and we're using the standard Windows Firewall.
    I can update this question with any additional information if needed.
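
    To reproduce this without the custom Java tool, iperf shows the same single-stream vs. parallel picture with one flag (a sketch; it assumes iperf is installed on both ends, and the port is a placeholder):

        # on the server
        iperf -s -p 5001
        # on the workstation: one stream, then 60 parallel streams
        iperf -c server -p 5001 -t 30
        iperf -c server -p 5001 -t 30 -P 60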


  • Anonymous access to SMB share hosted on Server 2008 R2 Enterprise

    - by bwerks
    Hi all, first off, I have read through this post and a whole slew of non-SF posts which seem to address the same or a similar problem, but I was still unable to fix my problem. I've got three machines in this situation:
    - a domain-joined server running Server 2008 R2 Enterprise ("share server")
    - a domain-joined workstation running XP Pro SP3 ("workstation")
    - a domain-unjoined test server running Server 2003 R2 SP2 ("test server")
    The share server is exposing a share on the network that the test server must access--it's a Source/Symbol Server share for our debugging purposes. I believe Visual Studio simply accesses the share with its own credentials in this case, meaning that the share must be accessible anonymously, since the test server isn't joined to the domain and there's no opportunity to supply domain authentication. I've attempted a lot of things to avoid the authentication window when accessing the share:
    - I've enabled the Guest account on the share server and given Guest full sharing/NTFS permissions for the share.
    - I've given ANONYMOUS LOGON full sharing/NTFS permissions for the share.
    - I've added my share to "Network access: Shares that can be accessed anonymously" in LSP.
    - I've disabled "Network access: Restrict anonymous access to Named Pipes and Shares" in LSP.
    - I've enabled "Network access: Let Everyone permissions apply to anonymous users" in LSP.
    - I've added ANONYMOUS LOGON to "Access this computer from the network" in LSP.
    - I've added the Guest account to "Access this computer from the network" in LSP.
    - I've attempted to provision the share using the Share and Storage Management MMC snap-in.
    Unfortunately, when I attempt to access the share from the test server, I still see the prompt and I'm forced to enter "Guest" manually. I also tried this workflow using the local administrator account on a workstation, and the same thing happens both with and without XP Simple File Sharing enabled. Any idea why I'm getting these results, or what I should have done differently?
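
    As I understand it, the LSP entry "Shares that can be accessed anonymously" ultimately lands in the NullSessionShares registry value, so when the policy seems to have no effect it can help to verify the value actually arrived and wasn't overridden by a GPO (a sketch; "Symbols" is a placeholder share name, and the Server service may need a restart afterwards):

        reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
            /v NullSessionShares /t REG_MULTI_SZ /d Symbols /f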


  • Sandra reports my CPU as "Engineering Sample", how can I be sure this is correct?

    - by stevenvh
    I ran SiSoftware's Sandra on my new PC, and for my CPU it reports:

        Generation : G8 / T29
        Name : TN0 (Trinity) FX/Opteron 32nm (ES)
        Revision/Stepping : 0 : 10 / 1
        Stepping Mask : TN-A1
        Microcode : MU6F10010F

    The (ES) is a well-known code in product development, meaning "Engineering Sample". Those are beta versions of the CPU, which may still contain bugs or even have features switched off. I contacted both the PC's manufacturer, Medion, and AMD about this. I have to downvote the Medion helpdesk here: the person I talked to boldly said Sandra was wrong (without knowing how Sandra got this information; he didn't even know the software) and used the word "impossible". His conclusion was "we're not taking this into consideration for service". Right. So, if you like Medion for their good prices but like good support even better, you may consider buying your PC elsewhere. AMD was more helpful, but wanted to be sure before replacing the part (which I find reasonable). They suggested that I dismount the cooler from the CPU to check what is printed on it. I'm a bit reluctant here: I would have to wipe the thermal paste from the CPU, and I won't know for sure that my cooling will still be OK afterwards. Questions:
    1. Has anybody actually found a confirmed ES CPU in her PC?
    2. Is anybody aware of Sandra erroneously reporting CPUs as engineering samples?
    3. How can you tell an ES, apart from the print on the package?
    4. Shouldn't the Stepping Mask identify the CPU uniquely?


  • What do the readonly attributes in diskpart really mean?

    - by marzipan
    I am wondering what exactly the "read-only" disk and volume attributes that you can twiddle in diskpart on Windows 7 mean. I am trying to set up an external USB drive as an installation medium for my own software, so I'd like to protect it against casual or inadvertent changes by the users it is given to, so they don't screw up the installation files they might need in the future. From what I can tell by experimenting with diskpart, the volume read-only attribute is actually stored on the physical disk somewhere, because I can set it and it shows up when I take the drive to another machine. This is great, because my users can't (easily) change any of the files on the volume or format it from Windows Explorer. However, the disk read-only attribute seems to be just an aspect of how the current machine accesses the drive. When I set it, I can no longer delete the volume on the disk via Disk Management, but when I take the drive to another machine, the attribute is no longer set, and in Disk Management I can delete the volume on the disk. I guess I'm not that worried about my users doing that, but I am annoyed that I don't understand what these attributes are really doing. Another thing I don't understand is that the "volume" read-only attribute actually seems to be global to the disk - if I have two volumes on the disk and I set the read-only flag on one of them, it gets set on the other one too. ?!? I have the feeling I'm not searching for the right docs - all I'm finding is diskpart docs that give the syntax for twiddling these attributes, not what they really mean. Any pointers would be very welcome! Thanks, Asa
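
    For reference, these are the commands in question (disk and volume numbers are placeholders; "list disk" / "list volume" shows the real ones):

        diskpart
        select disk 1
        attributes disk set readonly       rem the per-machine flag described above
        select volume 5
        attributes volume set readonly     rem the flag that appears to travel with the disk
        attributes volume clear readonly   rem and how to undo it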


  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use on an unencumbered player. The curl aspect is working, as I can run a browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script, the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
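
    The giveaway is in the error line: dst="'#standard{...}'" - the single quotes arrived as literal characters. Quotes inside an expanded variable are never re-parsed by the shell, so exec $CMD hands cvlc a mangled --sout. A sketch of two ways around it, assuming bash ($URL stands in for the fetched stream address):

        # option 1: build the command as an array, not a string
        CMD=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' "$URL")
        exec "${CMD[@]}"

        # option 2: skip the intermediate variable entirely
        exec cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' "$URL"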


  • RAM question in VMware Server 2

    - by ToreTrygg
    Hi, I understand from the VMware Server 2 documentation that VMware Server 2 is capable of running a 64-bit guest OS underneath a 32-bit host OS, as long as the hardware running the box is 64-bit capable. Here's my situation. We currently have an underutilized Xeon X3220 quad-core 64-bit server, running Server 2003 32-bit with 2 GB of RAM (the motherboard is capable of 8 GB of RAM). The server is currently used mainly for file and print services. It is also running Active Directory, Novell eDirectory and GroupWise 6.5. We are planning a migration to Microsoft Exchange, so the Novell eDirectory and GroupWise services will eventually be purged from this box, leaving only Active Directory and file and print services. Since this server is underutilized, we are hoping to save hardware costs and virtualize our new Exchange investment. My question is this: will VMware allow access to the "invisible" extra memory that 32-bit Windows won't see? Meaning, if we increase the system RAM to the full 8 GB (yes, I know the 32-bit host OS will only see a maximum of 4 GB), will I be able to assign maybe 5 GB to the new Server 2008 64-bit OS running Exchange and leave 3 GB for the host OS (or maybe even a 6/2 split)? The second part of that would be: would it be better to just convert the currently running main OS to an image, convert the machine itself to ESXi, and run both OSes as images under ESXi? Downtime for this box is critical, so my preference is most definitely the first option, because it presents very minimal downtime. Doing the second would mean quite a few hours of downtime to image the machine and then convert the image to a VMware image.


  • IPSec tunnelling with ISA Server 2000...

    - by Izhido
    Believe it or not, our corporate network still uses ISA Server 2000 (on a Windows Server 2003 machine) to enable and control Internet access to and from it. I was asked recently to configure that ISA Server to create a site-to-site VPN for a new branch office about 25 km away. The idea is basically to enable not only computers, but also Palm devices (WiFi-enabled, of course), to see other computers on both sites. I was also told that a simple VPN-enabled wireless AP/router (in this case, a Cisco WRV210 unit) should be enough to establish communications with the main office. To be fair, the router looks easy to configure; it was confusing at first, but further understanding of how site-to-site VPNs work cleared all doubts about it. Now I need to make modifications to our ISA Server so it recognizes the newly installed and configured "remote" VPN site. Thing is, either my Googling skills are pathetically horrible, or there doesn't seem to be much (or any) information about how to configure an ISA Server 2000 for this purpose. Lots of stuff on 2004, of course; also, I think I saw something for 2006. But nothing I could find about 2000. Reading about 2004, it seems that the only way I can do site-to-site with a Cisco router (read: a non-ISA-Server machine) is through something they call an "IPSec tunnel". Fair enough. However, I can't figure out for the life of me how I could even start to find, let alone configure, such a thing. Do you happen to know how to do IPSec tunneling on an ISA Server 2000, so I can connect to a Cisco WRV210 VPN-enabled router and build a site-to-site VPN for both networks? Or is this not possible at all? (Meaning I should change something in this configuration to make it work...)


  • BIND master/slave does not respond to queries for its slave zones

    - by Savas
    The systems are all CentOS 6.2. Let's say I have a masterdns with IP 10.2.1.2, authoritative for the 10.2.1.x subnet, and let's say its domain is example.com. I have another two subnets, 10.2.2.x and 10.1.2.x. Each one has its own DNS server, dns2 and dns1 respectively, and let's say these serve the domains dom2.example.com and dom1.example.com respectively. The masterdns server has slave copies of the dns1 and dns2 zones and responds to requests OK. dns1 and dns2 have the masterdns zones as slaves too, and respond to requests OK. So masterdns has slave copies of all the subordinate domains of example.com. dns1 and dns2 each use masterdns as a forwarder (and masterdns itself forwards to another DNS cache/proxy server) for resolution of public Internet domain names. That works OK too. The problem, which I cannot figure out, is this: why do queries at dns1 for hostnames in dom2.example.com not resolve? If I use nslookup - masterdns on the dns1 server (i.e. query masterdns's DNS facility directly), they resolve. If I use nslookup locally, meaning the queries are sent to dns1, hosts in dom2.example.com do not resolve. Everything else works OK.
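
    If dns1 only slaves example.com and its own zone, queries for dom2.example.com names fall through to forwarding, and anything that blocks or restricts recursion for the querying clients will produce exactly this pattern. A sketch of the slave zone I'd expect in dns1's named.conf so it answers dom2 queries authoritatively (BIND 9 syntax; the file path, and whether to pull from masterdns or from dns2 directly, are assumptions):

        zone "dom2.example.com" {
            type slave;
            masters { 10.2.1.2; };   // masterdns - or dns2's own address
            file "slaves/dom2.example.com.db";
        };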


  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer, and after nine years of hard work I'm looking to become a bit more "vagrant" starting next year: do some much-needed traveling and work a bit off and on, making use of one of the greatest advantages of a programming job: the ability to work from virtually everywhere. For that, I am looking for a reliable hosting company I can entrust my code to, in the form of a number of Subversion repositories, plus an installation of the Redmine project management tool. As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two, and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations, preferably from your own personal experience? I have looked at CVSDude / Codesion, and while they are certainly great, they don't offer Redmine, and they seem to be aiming mainly at bigger organizations. What I would need:
    - 2-5 GB of space minimum, freely distributable between SVN and Redmine attachments
    - an unlimited number of Subversion projects
    - access control (team members / checkout-only accounts / etc.) - I don't mind configuring the svn settings on a per-file basis myself
    - the possibility to map a custom domain to the package, which is hosted elsewhere
    - frequent backups and access to those backups through FTP or other means
    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the Internet connection to fix problems that come up.

