Search Results

Search found 27501 results on 1101 pages for 'power view'.

  • Find out the MAC address

    - by pirates-iiita
    Hello everyone, can anyone please tell me how to find out the MAC address of a system which is: 1. shut down, 2. power plugged in, 3. connected to the LAN, 4. NIC card on? Kindly post the solution, as I urgently need it for my project. Thank you.
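
    A machine that is shut down won't answer ARP queries itself (even with Wake-on-LAN keeping the NIC powered), so the usual approach is to read the MAC from something that already recorded it: the DHCP server's lease table, the switch's MAC address table, or a recent entry in another host's ARP cache. A rough sketch, assuming the target was on the LAN recently (the hostname and lease-file path are placeholders and vary by system):

        ip neigh show                                          # Linux: dump the ARP/neighbour cache
        arp -a                                                 # Windows: same idea
        grep -B6 "target-hostname" /var/lib/dhcp/dhcpd.leases  # on the DHCP server, if you run one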

  • How to limit an app to a specific fraction (say 80%) of the CPUs, balanced across them

    - by swellfr
    Hi, I was wondering if it would be possible to run an app not at 100% of the CPU but at a specific fraction of it. I see different uses for this: we could better balance concurrent applications (we may want to cap each app/agent at 50% to keep things fair). I was also wondering whether power consumption would be better if the CPUs don't run at full throttle but at some lower level (say 80%). What are your thoughts? Thanks, examples are welcome :)
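
    On Linux this kind of cap is usually imposed from outside the application. A minimal sketch with cpulimit and the cgroup cpu controller (cgroups v1 paths; PID 1234 and the 80% figure are placeholders):

        cpulimit --limit 80 --pid 1234           # throttle PID 1234 to ~80% of one core
        # or via the cgroup cpu controller:
        mkdir /sys/fs/cgroup/cpu/capped
        echo 80000 > /sys/fs/cgroup/cpu/capped/cpu.cfs_quota_us   # 80 ms of CPU per 100 ms period
        echo 1234  > /sys/fs/cgroup/cpu/capped/tasks

    On the power question, note that modern CPUs often save more energy by finishing work quickly and dropping into an idle state ("race to idle") than by being pinned at a partial duty cycle.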

  • Lower CPU % used by FFMPEG Process via PHP (FLV Video Conversion)

    - by BoRo
    Hi, I currently execute an ffmpeg command via PHP to run a video conversion. The problem I'm having: during the conversion, the ffmpeg process(es) use up nearly 100% of the CPU, which slows down the response of my webserver. Is there a way (crontab or script) I can limit the ffmpeg processes to a certain CPU percentage? Thanks,
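
    A common workaround is to start ffmpeg at the lowest scheduling priority, so it yields the CPU to the webserver whenever there is contention, or to wrap it in cpulimit for a hard cap. A sketch (file names and the 50% figure are examples, untested against this setup):

        // PHP: run ffmpeg niced into the background
        exec('nice -n 19 ffmpeg -i input.avi output.flv >/dev/null 2>&1 &');
        // or, if the cpulimit utility is installed, enforce a hard percentage cap:
        exec('cpulimit -l 50 ffmpeg -i input.avi output.flv >/dev/null 2>&1 &');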

  • VLC (Server) re-stream Security Camera Feed

    - by Aaron
    I purchased a Swann Home Security DVR system and was hoping for some help on how to duplicate the streaming video on my server. In order to get their web view (streaming video in the browser) to work, I had to install the following plugins: HiDvrPlugin.dmg for Mac, Hidvrocx.cab for Windows. I was originally thinking this was a sign of some form of DRM. Maybe. Maybe not. HTML-wise, the following code is in the source of the Safari version of the web view: <embed pluginspage="SurveilClient.dmg" width="10px" height="10px" type="application/x-scplugin" id="MacDiv" style="height: 592px; width: 720px; left: 278px; top: 61px; ">. It seems to be the main display area. Using Wireshark, I am able to see that the video stream is on port 9000. However, I have no idea what type of stream it is. I've tried opening it in VLC with no luck: http://dvr_ip:9000 and tcp://dvr_ip:9000. My hope was to do something like the following to redistribute the feed: vlc dvr_ip:9000 --sout h264-version-on-localhost:3000. TL;DR: Trying to re-distribute a stream from a security camera (can't tell the format) using VLC (re-distribute via H.264 / HTML5). Not sure how to accomplish this. Is it possible that the software has some type of DRM that only the plugins can decode?
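
    If the DVR really did emit a plain H.264/MPEG-TS stream on port 9000, a VLC re-stream would look roughly like the sketch below (the port numbers are examples). The need for a proprietary browser plugin, though, suggests the DVR speaks its own framing protocol on that port, which VLC will not parse regardless of how the output chain is written:

        cvlc tcp://dvr_ip:9000 --sout '#std{access=http,mux=ts,dst=:3000}'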

  • Plesk SSL Certificate (Default cert when SSL enabled, CORRECT cert when SSL is disabled)

    - by hztetra
    I'm running Plesk 8.6.0 and have an SSL cert installed through Plesk's admin interface, but I have a bit of an issue: when I enable SSL for the site, select my cert, and restart httpd, Plesk defaults to using my self-signed default certificate. Conversely, when I disable SSL support for the domain, all of a sudden Plesk is using my new SSL certificate. Unfortunately, when I try to view any folder on the site (mydomain.tld/folder) I'm simply met with a 404 (with files placed both in httpdocs and httpsdocs). I switch SSL support back on, Plesk defaults back to the self-signed cert, and I can then view the folders that were not previously accessible. Any ideas? One further note: I tried following http://kb.parallels.com/en/939. Once I restarted httpd with the edited ssl.conf file, I received an 'httpd could not start' error. I restored the original ssl.conf file and still got the same error, so as of now I am running without an ssl.conf file. The following is the error I receive when I attempt to reintroduce ssl.conf:

        Starting httpd: [Mon Aug 23 15:45:40 2010] [warn] module ssl_module is already loaded, skipping
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
        no listening sockets available, shutting down
        Unable to open logs
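
    Those two messages usually mean the reintroduced ssl.conf duplicates directives Plesk already ships elsewhere: mod_ssl gets loaded twice and "Listen 443" appears twice, so the second bind fails. A quick way to check for duplicates (paths assume the RHEL/CentOS-style layout Plesk 8 typically uses):

        grep -R "LoadModule ssl_module" /etc/httpd/
        grep -R "Listen 443" /etc/httpd/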

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by Graeme Donaldson
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each; lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1 and host2 can see both LUNs, and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on? Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to SVMotion everything off that datastore and then format it. Update 2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN; at this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with an error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
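
    The "(head)" label and the resignature prompts match how ESX(i) 4.x handles a LUN it believes is a snapshot of an existing VMFS volume. From the host's console (or Tech Support Mode) the unresolved copies can be listed and force-mounted while keeping the original signature; a sketch, with DATASTORE1 as a placeholder label:

        esxcfg-volume -l              # list volumes detected as snapshots / unresolved VMFS copies
        esxcfg-volume -M DATASTORE1   # persistently mount it, keeping the existing signature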

  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7-year-old laptop that has been crippled by a glitch in its BIOS: 'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba. As they stated, this machine is "unusable": the Escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch:
    1. "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there were reports that this solution is prohibitively expensive, with labor charges accruing even while the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn't find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." -Steve. Since the cost of the repair can now exceed the value of the hardware, this is either a DIY solution or a non-solution (i.e. the hardware is trash).
    2. Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-S1065 (as well as many other models of similar age). -pwcrack
    3. Disconnect the laptop battery for an extended period of time. Doesn't work; the laptop sat in a closet for several years without the battery connected and I forgot about the whole thing for a while. The poor thing.
    4. Clear the CMOS by setting the proper jumper, by removing the CMOS (RTC) battery, or by short-circuiting a (hidden?) jumper that looks like a pair of solder marks. Various sources for various Satellite models:
       - Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap
       - Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020
       - Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar
       - Satellite A215: "Short the B500 solder pads on the system board." -fixya
    Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered), and I have looked closely at the area around the memory without finding any obvious solder pads that could be a secret jumper. I have taken pictures of the board. Where is the jumper (or the solder pads to short-circuit) to wipe the CMOS on this board? Possibly related questions: Remove Toshiba laptop BIOS password? Password Problem Toshiba Satellite.

  • IMAP proxy as a POP3 hub?

    - by mailman stan
    Simple scenario, complicated technology: one family receiving mail from five email addresses via POP3 into one Outlook inbox on a single PC. Now we'd like to be able to replicate that single inbox across multiple devices (e.g. desktop PC, laptop, netbook, smartphone). If we continue using POP3 as the retrieval protocol, messages will be downloaded to one device and will not be visible to the others; replies will likewise be isolated on the sending machine. If we switch to IMAP, I understand that we can have multiple devices maintaining a shared view of an inbox hosted at the server end, but what about multiple accounts? I tried changing the account configuration in Outlook to fetch from the mail providers' IMAP service instead of POP3, which does give a shared view across multiple devices but also causes Outlook to create a separate inbox and PST for each account. This is awkward because it means there are five separate folders that need to be checked, and Outlook tools like search filters and rules don't seem to work across accounts. To get what I want (five accounts delivered into one shared mailbox) it seems that I would need some sort of intervening server that collects mail (using POP3) from all our accounts into a single inbox while preserving the original destination addresses, and then serves it up to all our devices using IMAP. Is this workable? Is it a good approach? Is there an easier way?
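
    The "intervening server" pattern is workable with standard tools: a small always-on box pulls every POP3 account with fetchmail into one local mailbox, and an IMAP server such as Dovecot serves that mailbox to all devices. A minimal fetchmail sketch (hosts, addresses and passwords are placeholders):

        # ~/.fetchmailrc -- gather five POP3 accounts into the local user "family"
        set daemon 300                     # poll every 5 minutes
        poll pop.provider1.example proto pop3
            user "addr1@provider1.example" pass "secret1" is "family" here ssl
        poll pop.provider2.example proto pop3
            user "addr2@provider2.example" pass "secret2" is "family" here ssl
        # ...and so on for the remaining accounts

    The original destination address remains visible in each message's headers, so server-side rules can still sort by account if wanted.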

  • Wildcard DNS, VirtualHosts on apache2, 404 for unused subdomains

    - by niel
    On an Apache2 server linked to by a DNS zone that includes a wildcard entry, e.g. *.example.com, subdomains that are not defined as ServerNames in any VirtualHost point to the first defined VirtualHost, which in my example is 000-default. My question: how would one get unused subdomains (subdomains not used in any VirtualHost) to return a 404 error to the requesting client? It must preferably show up in the server logs as a 404 as well. I have looked into the following possibilities:
    1. Redirecting any invalid subdomain to the home page or some other page. The problem with this method is that when someone links to your site as this.company.sucks.example.com, the client will see your home page (or in my case 000-default, if I do not redirect). Thanks to Mike for pointing this out. (A regex for "sucks" etc. is definitely not an option.)
    2. Letting the default VirtualHost point to a non-existent directory. Apache does not like this one bit, warning with every reload. Beyond the warning, everything seems fine, but it feels like a hack. Does this seem like a problem (however small) to anyone?
    3. Pointing the default VirtualHost to a folder where the index.php is forbidden, thus producing a 403 status code. This is confusing and makes things like the following overly complicated: say, for example, you use a subdomain per user (a big reason to use wildcard DNS, apparently), and users can view each other's profiles at username.example.com. This solution is confusing to the user and completely not what I want to do.
    My ideal solution will let the user know there is nothing to view at the URL they entered, preferably with a 404 and an error log entry for the address entered (not some other address). Any help would be greatly appreciated!
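
    A fourth option along the lines asked for: a catch-all VirtualHost listed first in the configuration (the first vhost is the default for any unmatched Host header) that answers with a genuine 404, which is also what gets written to the logs. A sketch for Apache 2.2 (the ServerName is a placeholder):

        <VirtualHost *:80>
            ServerName catchall.example.com
            RewriteEngine On
            # respond 404 to everything; logged as a 404 for the requested host
            RewriteRule ^ - [R=404,L]
        </VirtualHost>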

  • Using GitOAuthPlugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity and maybe a fix. I'm using this plugin to authorise who views our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin. As I understand it, anyone who is authorised to view one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also restrict the person logging in to viewing only the projects they have GitHub permission on. For instance: three projects on GitHub (A, B, C) and three builds on Jenkins. User 1 has Git access to all 3 projects (A, B, C); User 2 has Git access to only 1 project (A). When logging into Jenkins, User 1 can see all 3 projects (this works), but User 2 can also see all 3 projects when they should only see project A! Have I got this right, and if so, is this a bug? I have the settings configured in Jenkins under the Github Authorization Settings: some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open sourced here: https://github.com/mocleiri/github-oauth-plugin. I was trying to get Jenkins to print the logs from the plugin, but I also failed at viewing those (to see if there was an issue); I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging. It's the same concept as outlined below, but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29. Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

        $ echo -n | openssl s_client -connect www.google.com:443
        CONNECTED(00000003)
        depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        verify error:num=20:unable to get local issuer certificate
        verify return:0
        ---
        Certificate chain
         0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
           i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
         1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
           i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
        MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
        THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
        MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
        MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
        FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
        gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
        gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
        05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
        BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
        LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
        BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
        Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
        ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
        AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
        u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
        z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
        -----END CERTIFICATE-----
        subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
        issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1777 bytes and written 316 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
            Session-ID-ctx:
            Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
            Key-Arg   : None
            Start Time: 1266259321
            Timeout   : 300 (sec)
            Verify return code: 20 (unable to get local issuer certificate)
        ---

    it just shows that the negotiated cipher suite is AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the cipher suites of the client, so it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does specifically this?
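
    Since the server simply picks one suite from whatever the client offers, a brute-force check is to offer each suite by itself and see whether the handshake succeeds. A rough bash sketch (slow, and limited to the suites the local OpenSSL build knows about):

        for c in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
            if echo | openssl s_client -connect www.google.com:443 -cipher "$c" >/dev/null 2>&1; then
                echo "accepted: $c"
            fi
        done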

  • Do I have to do anything for file ACLs to work both before and after I reformat a computer?

    - by Zian Choy
    Let's say that I have a computer running Windows Vista with 2 users: Alice and Bob. Alice is the admin and Bob is a normal user. They each have files in their respective My Documents folders; Bob is not allowed to view Alice's files, and Alice has to jump through a UAC elevation to view Bob's. Suppose Alice copies all the files on the computer to an external NTFS-formatted hard drive with the following 2 commands: robocopy "E:\Bob's Files" "C:\Users\Bob\My Documents" /MIR and robocopy "E:\Alice's Files" "C:\Users\Alice\My Documents" /MIR, and then reformats the hard drive, installs a fresh copy of Windows, and creates 2 users named Alice and Bob on the computer. Will everything in the first paragraph still be true after Alice copies the files back onto the internal hard drive? Assume that when the files are copied back over, she logs in as Bob and then copies Bob's files, and likewise with her own files. Possibly relevant: Alice and Bob also have passwords on their user accounts, and they create new passwords after the computer is reformatted. The main post has been tweaked slightly to make the question clearer. Answers that predate April 2011 are referring to an earlier version of this post.
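
    Two details of the example commands worth flagging: robocopy's argument order is source then destination (so as written, the commands copy from E: back to C:), and by default robocopy carries data, attributes and timestamps but not NTFS ACLs. Carrying the permissions too would look like this sketch:

        robocopy "C:\Users\Bob\My Documents" "E:\Bob's Files" /MIR /SEC
        robocopy "C:\Users\Alice\My Documents" "E:\Alice's Files" /MIR /SEC

    (/SEC is shorthand for /COPY:DATS, i.e. data, attributes, timestamps plus the security descriptor. Whether those ACLs still mean anything after the reinstall depends on the new accounts' SIDs, which is presumably the heart of the question.)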

  • How to utilize 4TB HDD, which is showing up as 2.72TB

    - by mason
    I have two internal HDDs, both 4TB capacity. They're both formatted with the GPT partitioning scheme, and they're basic disks (not dynamic). I'm on Windows 8 64-bit, with UEFI rather than BIOS. When I view the disks in the Computer Management MMC under Disk Management, each partition is formatted as NTFS and takes up the entire drive. Each drive shows a capacity of 3725.90 GB in the bottom section of Disk Management, but 2,794.39 GB in the top section. When I view the disks in "My Computer"/"This PC" they only show up as 2.72 TB, which matches the capacity I'm getting from some other 3TB HDDs I have. Why are they showing up as only 2.72 TB? Will I be able to use the full 4TB capacity? Also of note, although I'm not sure it's relevant: I often get corrupted files on these two HDDs, while none of my other HDDs give me corrupted files. Usually the problem is fixed by running chkdsk /f on the drives, but it's extremely annoying. (The affected drives are X: and Y:.) Steps I've tried: flashed the latest BIOS/UEFI firmware (MSI J.90 to K.30).
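
    A units sanity check helps frame this: drive makers count decimal bytes while Windows reports binary units, and the two differ by roughly 10% at the terabyte scale. 3 TB = 3×10^12 B ÷ 2^40 B/TiB ≈ 2.73 TiB (displayed as "2.72 TB"), while 4 TB ≈ 3.64 TiB, which matches the 3725.90 GB figure in the bottom section of Disk Management. So the disks are being seen at full size at the device level, but the volume Explorer sees is the size of a 3 TB disk; roughly a terabyte appears to be genuinely missing from the volume rather than lost to decimal-vs-binary rounding.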

  • How to set up calendar permissions from one group to another

    - by Sorean
    I've been scouring the internet and so far have only been able to find examples of how to grant calendar permissions from one user to another using the Add-MailboxFolderPermission command. That was okay when we only had a handful of users, but going forward it's not realistic to set individual calendar permissions on every calendar for each new user. Layout of the security groups already created (each group has a few people assigned to it): Techs, Managers, Admin. What I am trying to accomplish is that anyone who belongs to the Managers group can view the calendars of the Techs group, and Admins can view and edit them. I've found an example of adding just the security group name, but I get an error:

        [PS] C:\Windows\system32> Add-MailboxFolderPermission -Identity Techs:\Calendar -User "Admin" -AccessRights Owner
        The user "Admin" is either not valid SMTP address, or there is no matching information.
            + CategoryInfo          : NotSpecified: (0:Int32) [Add-MailboxFolderPermission], InvalidExternalUserIdException
            + FullyQualifiedErrorId : 39352699,Microsoft.Exchange.Management.StoreTasks.AddMailboxFolderPermission

    Am I creating groups wrong? Am I using the wrong commands? Any guidance would be greatly appreciated.
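
    That InvalidExternalUserIdException is what Add-MailboxFolderPermission returns when the -User value doesn't resolve to a mail-enabled object, and plain (non-mail-enabled) security groups fall into that category. A sketch of one way around it, assuming mail-enabling the groups is acceptable (group names and access rights here are examples):

        # create a mail-enabled security group (or mail-enable an existing
        # group with Enable-DistributionGroup)
        New-DistributionGroup -Name "Managers" -Type Security

        # Managers can view the Techs calendar; Admins can view and edit it
        Add-MailboxFolderPermission -Identity "Techs:\Calendar" -User "Managers" -AccessRights Reviewer
        Add-MailboxFolderPermission -Identity "Techs:\Calendar" -User "Admins" -AccessRights Editor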

  • DNS setup problems with Windows Azure VPS

    - by jbigelow
    What is the proper way to set up the A record (or CNAME) for a Windows Azure VPS? I can't connect to my website after setting up IIS and believe I don't have the correct DNS setup. I created a small VPS instance with the default Windows Server 2012 configuration, RDP'd in, and added the Web Server role. In my DNSMadeEasy control panel I added an A record with my public Virtual IP address. In IIS I went to the default website and added bindings for the hostname of my website, so I should be able to type mywebsite.com and see the IIS 8 splash screen; instead my browser cannot connect. I attempted to navigate to the site by typing my Virtual IP address into the browser and still cannot connect. I RDP'd back into the machine and turned off Windows Firewall. No change; I still cannot reach my website. From within IIS I double-checked my binding. If I click "Browse *:80" I can bring up my website in IE at http:// localhost. If I click "Browse mywebsite on *:80" IE says "This page cannot be displayed." From within the RDP session I can view the site if I navigate to http:// 127.0.0.1, but not if I navigate to my Virtual IP, nor can I view the page at http:// mywebservername.cloudapp.net. I'm thinking I must be fundamentally misunderstanding how to do DNS setup with an Azure VPS, but my initial Google searches aren't turning up any helpful information. (Spaces added after the http:// so serverfault doesn't try to render them as valid URLs.)
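
    Given that the site doesn't answer on the Virtual IP or the cloudapp.net name either, this looks less like DNS than like reachability. Two quick checks from an outside machine (names are placeholders), plus one platform detail worth ruling out: on this generation of Azure VMs, inbound ports also have to be opened as endpoints on the VM in the management portal, separately from Windows Firewall, so a missing TCP/80 endpoint would produce exactly these symptoms:

        nslookup mywebsite.com      # should return the public Virtual IP
        telnet <virtual-ip> 80      # connects only if port 80 is reachable end to end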

  • Take snapshot of drawing using FingerPaint

    - by Rashmi.B
    I am using MyView for drawing content on a canvas, based on the FingerPaint API demo app. I want to capture whatever I have drawn on the canvas and save it to the SD card, but when I use View v1 = myview.getRootView() it returns only a blank canvas, not the content. Following is my code; let me know what I need to change:

        v1 = myview.getRootView();
        System.out.println("v1 value = " + v1);
        v1.buildDrawingCache(true);
        v1.measure(MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED),
                   MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED));
        //v1.layout(0, 0, v1.getMeasuredWidth(), v1.getMeasuredHeight());
        v1.layout(0, 0, 100, 100);
        //Bitmap b = Bitmap.createBitmap(v1.getDrawingCache());
        myview.mBitmap = Bitmap.createBitmap(v1.getDrawingCache());
        System.out.println("BITMAP VALue = " + myview.mBitmap);
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        //b.compress(Bitmap.CompressFormat.JPEG, 40, bytes);
        File f = new File(Environment.getExternalStorageDirectory() + File.separator + "rashmitest.jpg");
        try {
            f.createNewFile();
            FileOutputStream fo = new FileOutputStream(f);
            fo.write(bytes.toByteArray());
        } catch (Exception e) {
            e.printStackTrace();
        }
        v1.setDrawingCacheEnabled(false);

    myview is an object of class MyView, which extends View.
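
    Two things stand out in that code: the root view's drawing cache is often blank when a custom view paints directly in onDraw() rather than into the cache, and the compress() call that would fill the bytes stream is commented out, so the file would be empty even if the cache worked. A commonly suggested alternative is to render the view into a bitmap you own; an untested sketch, assuming myview is the FingerPaint view:

        // draw the view's onDraw() output into our own bitmap
        Bitmap b = Bitmap.createBitmap(myview.getWidth(), myview.getHeight(),
                                       Bitmap.Config.ARGB_8888);
        Canvas c = new Canvas(b);
        myview.draw(c);

        // write it to the SD card as a JPEG
        File f = new File(Environment.getExternalStorageDirectory(), "rashmitest.jpg");
        FileOutputStream fo = new FileOutputStream(f);
        b.compress(Bitmap.CompressFormat.JPEG, 90, fo);
        fo.close();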

  • Changing Domain Name DNS to Redirect web traffic to one server, and leave mail to original server

    - by David S
    Hi there. OK, I'm quite the idiot with DNS, apart from the basics. I have a domain name hosted with a domain registrar, which seems to give full DNS control (i.e. the ability to view/edit A records, mail settings, etc.). We have recently set up a server at Rackspace which hosts the new website. The original/existing server (where the old website and the mail still live) is a shared-hosting server at another company. I went to the domain registrar and checked the DNS management (I have a screenshot of the A record and CNAME settings). The A record points to the server where the website/mail currently are, and the CNAME is an alias pointing to the website URL. So my question is this: if I want the web traffic to go to the Rackspace/new server but keep the mail going to where it is now, what do I have to change? Also, should I even change this info at the registrar? The Rackspace account has full DNS as well, which seems to suggest I could point the domain at their nameservers and then direct the MX (mail) records back to where the mail server is. Sorry if that was a bit confusing; obviously in need of DNS training ;) Any help very appreciated. David.
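
    In outline, only the A records for the web host names need to move; the MX record keeps pointing at the current mail server. Illustrative zone entries (all names and addresses are placeholders):

        example.com.        A     203.0.113.10                ; new Rackspace web server
        www.example.com.    A     203.0.113.10
        example.com.        MX    10 mail.oldhost.example.    ; mail stays on the old server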

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc; processing automation and data management is handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make it easier to maintain, to evenly distribute load and data with replication, and to grow in disk space and compute power. The system needs to be able to handle millions of files, varying in size between 50 megabytes and 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over because they are closed source) that essentially transform one data set into another. No data mining happening here. Here is a quick list of things I am looking for:
    Reliability: These data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster.
    Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
    Distributed processing: Easy and automatic job scheduling and load balancing that fits the processing workflow I briefly described above.
    Data location awareness: Not strictly required but desirable. Since data and processing will be on the same set of nodes, I would like the job scheduler to schedule jobs on or close to the node the data is actually on, to cut down on network traffic.
    Here is what I've looked at so far. Storage management:
    GlusterFS: Looks really nice and easy to use, but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler.
    GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements except, like GlusterFS, data location awareness.
    Ceph: Seems way too immature right now.
    Distributed processing:
    Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is). But Oracle got its icy grip around it and it no longer seems very desirable.
    Both:
    Hadoop/HDFS: At first glance Hadoop looked perfect for my situation: distributed storage and job scheduling, and it was the only thing I found that would give me the data location awareness I wanted. But I don't like the NameNode being a single point of failure. Also, I'm not really sure the MapReduce paradigm fits the type of processing workflow I have; it seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler.
    OpenStack: I've done some reading on this but I'm having trouble deciding whether it fits my problem or not.
    Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!

  • A faulty Caviar Blue hard drive?

    - by Glister
    We have a small "homemade" server running fully updated Debian Wheezy (amd64), with one hard drive installed (a WDC WD6400AAKS) and an ASUS M4N68T V2 motherboard. The usual load: CPU at an average of 20%; each week about 50GB of additional space is occupied, roughly 47GB of uploaded files and 3GB of MySQL data. I'm afraid that the hard drive may be about to fail; I saw "Pre-fail" in a few places when I ran:

        root@SERVER:/tmp# smartctl -a /dev/sda
        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Model Family:     Western Digital Caviar Blue Serial ATA
        Device Model:     WDC WD6400AAKS-XXXXXXX
        Serial Number:    WD-XXXXXXXXXXXXXXXXXXX
        LU WWN Device Id: 5 0014ee XXXXXXXXXXXXX
        Firmware Version: 01.03B01
        User Capacity:    640,135,028,736 bytes [640 GB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Mon Oct 28 18:55:27 2013 UTC
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x85) Offline data collection activity was
                                                aborted by an interrupting command from host.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      ( 247) Self-test routine in progress...
                                                70% of test remaining.
        Total time to complete Offline
        data collection:                (11580) seconds.
        Offline data collection
        capabilities:                    (0x7b) SMART execute Offline immediate.
                                                Auto Offline data collection on/off support.
                                                Suspend Offline collection upon new command.
                                                Offline surface scan supported.
                                                Self-test supported.
                                                Conveyance Self-test supported.
                                                Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering
                                                power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine
        recommended polling time:        (   2) minutes.
        Extended self-test routine
        recommended polling time:        ( 136) minutes.
        Conveyance self-test routine
        recommended polling time:        (   5) minutes.
        SCT capabilities:              (0x303f) SCT Status supported.
                                                SCT Error Recovery Control supported.
                                                SCT Feature Control supported.
                                                SCT Data Table supported.

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
          3 Spin_Up_Time            0x0027   157   146   021    Pre-fail  Always       -       5108
          4 Start_Stop_Count        0x0032   098   098   000    Old_age   Always       -       2968
          5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
          7 Seek_Error_Rate         0x002e   200   200   051    Old_age   Always       -       0
          9 Power_On_Hours          0x0032   079   079   000    Old_age   Always       -       15445
         10 Spin_Retry_Count        0x0032   100   100   051    Old_age   Always       -       0
         11 Calibration_Retry_Count 0x0032   100   100   051    Old_age   Always       -       0
         12 Power_Cycle_Count       0x0032   098   098   000    Old_age   Always       -       2950
        192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       426
        193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       2968
        194 Temperature_Celsius     0x0022   111   095   000    Old_age   Always       -       36
        196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
        197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
        198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
        199 UDMA_CRC_Error_Count    0x0032   200   160   000    Old_age   Always       -       21716
        200 Multi_Zone_Error_Rate   0x0008   200   200   051    Old_age   Offline      -       0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Short offline       Completed without error       00%         15444         -

        Error SMART Read Selective Self-Test Log failed: scsi error aborted command
        Smartctl: SMART Selective Self Test Log Read Failed

        root@SERVER:/tmp#

    In one tutorial I read that Pre-fail is an indication of coming failure; in another I read that it is not. Can you guys help me decode the output of smartctl? It would also be nice to share suggestions on what I should do to ensure data integrity (about 50GB of new data each week, up to 2TB for the whole period I'm interested in). Maybe I will go with 2x2TB Caviar Black drives in RAID4?
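
    As a general key for reading such tables (not a diagnosis): "Pre-fail" labels the attribute class, not the drive's current state, and an attribute only counts as failing when its normalized VALUE drops to THRESH or below; Raw_Read_Error_Rate at 200 against a threshold of 51, for example, is healthy. The raw UDMA_CRC_Error_Count of 21716 is the one figure that stands out here, and CRC errors conventionally implicate the SATA cable or link rather than the platters, so reseating or replacing the cable is a cheap thing to rule out first.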

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate out code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there's anywhere you do a REVOKE of a GRANT that is installed by default; that may be version dependent, as 11g seems more locked down in its default install. These are the ones I used in a recent setup. I'd like to know whether I missed anything or where you disagree (and why).
    Database parameters: auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES); DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL); GLOBAL_NAMES to TRUE; OPEN_LINKS to 0 (did not expect them to be used in this environment); character set AL32UTF8.
    Profiles: I created an amended password verify function that used the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.
    Developer logins:

        CREATE PROFILE profile_dev LIMIT
          FAILED_LOGIN_ATTEMPTS 8
          PASSWORD_LIFE_TIME 32
          PASSWORD_REUSE_TIME 366
          PASSWORD_REUSE_MAX 12
          PASSWORD_LOCK_TIME 6
          PASSWORD_GRACE_TIME 8
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME 1080
          IDLE_TIME 180
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Application login:

        CREATE PROFILE profile_app LIMIT
          FAILED_LOGIN_ATTEMPTS 3
          PASSWORD_LIFE_TIME 999
          PASSWORD_REUSE_TIME 999
          PASSWORD_REUSE_MAX 1
          PASSWORD_LOCK_TIME 999
          PASSWORD_GRACE_TIME 999
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME unlimited
          IDLE_TIME unlimited
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema owner account: CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, CREATE JOB, CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER.

  • Apache directive for authenticated users?

    - by Alex Leach
    Using Apache 2.2, I would like to use mod_rewrite to redirect unauthenticated users to HTTPS if they are on HTTP. Is there a directive or condition one can test for whether a user is (not yet) authenticated? For example, I have set up the restricted /foo location on my server:

        <Location "/foo/">
            Order deny,allow
            # Deny everyone, until authenticated...
            Deny from all
            # Authentication mechanism
            AuthType Basic
            AuthName "Members only"
            # AuthBasicProvider ...
            # ... Other authentication stuff here.
            # Users must be valid.
            Require valid-user
            # Logged-in users authorised to view child URLs:
            Satisfy any
            # If not SSL, respond with HTTP-redirect
            RewriteCond %{HTTPS} off
            RewriteRule /foo/?(.*)$ https://%{SERVER_NAME}/foo/$1 [R=301,L]
            # SSL enforcement.
            SSLOptions FakeBasicAuth StrictRequire
            SSLRequireSSL
            SSLRequire %{SSL_CIPHER_USEKEYSIZE} >= 128
        </Location>

    The problem here is that every file, in every subfolder, will be encrypted. This is quite unnecessary, but I see no reason to disallow it. What I would like is for the RewriteRule to be triggered only during authentication; if a user is already authorised to view a folder, I don't want the RewriteRule to fire. Is this possible? EDIT: I am not using any front-end HTML here. This is only using Apache's built-in directory browsing interface and its built-in authentication mechanisms. My <Directory> config is:

        <Directory ~ "/foo/">
            Order allow,deny
            Allow from all
            AllowOverride None
            Options +Indexes +FollowSymLinks +Includes +MultiViews
            IndexOptions +FancyIndexing
            IndexOptions +XHTML
            IndexOptions NameWidth=*
            IndexOptions +TrackModified
            IndexOptions +SuppressHTMLPreamble
            IndexOptions +FoldersFirst
            IndexOptions +IgnoreCase
            IndexOptions Type=text/html
        </Directory>
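
    For what it's worth, there is no clean mod_rewrite test for "not yet authenticated". A widely used workaround sidesteps the question: let SSLRequireSSL reject plain-HTTP requests with a 403, then turn that 403 into a redirect to the HTTPS URL, so credentials are only ever requested over TLS. A sketch (the URL is a placeholder):

        <Location "/foo/">
            SSLRequireSSL
            # plain-HTTP requests fail SSLRequireSSL with a 403;
            # this ErrorDocument turns the 403 into a redirect to HTTPS
            ErrorDocument 403 https://www.example.com/foo/
        </Location>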

  • What folders to encrypt with EFS on Windows 7 laptop?

    - by Joe Schmoe
    Since I've been using my laptop more as a laptop recently (carrying it around), I am now evaluating my strategy for protecting confidential information in case it is stolen. Keep in mind that my laptop is 6 years old (Lenovo T61 with 8 GB of RAM, 2 GHz dual-core CPU). It runs Windows 7 fine but it is no speed demon, and it doesn't support the AES instruction set. I've been using a TrueCrypt volume, mounted on demand, for really important stuff like financial statements forever; nothing else is encrypted. I just finished evaluating EFS and BitLocker and took a closer look at TrueCrypt again, and I've come to the conclusion that boot-partition encryption via BitLocker or TrueCrypt is not worth the hassle. I may decide in the future to use BitLocker or TrueCrypt to encrypt one of the data volumes, but at this point I intend to use EFS to encrypt the parts of my hard drive that contain data I wouldn't want exposed. The purpose of this post is to get your feedback about which folders should be encrypted from a general point of view (of course everyone will have something specific in addition). Here is what I have thought of so far (will update if I think of something else): 1) AppData\Local\Microsoft\Outlook - Outlook files. 2) AppData\Local\Thunderbird\Profiles and AppData\Roaming\Thunderbird\Profiles - Thunderbird profiles; not sure yet where exactly the data is stored. 3) AppData\Roaming\Mozilla\Firefox\Profiles\djdsakdjh.default\bookmarkbackups - Firefox bookmark backups. Is there a separate location for the "main" Firefox bookmark file? I haven't figured it out yet. 4) Bookmarks for Chrome (I don't know where its bookmarks are) and Internet Explorer ($Username\Favorites) - I don't really use them, but why not secure those as well. 5) The Downloads\, My Documents\ and My Pictures\ folders I don't think I need to encrypt wholesale; say, the latest service pack for Visual Studio doesn't need protecting. So I will probably create a subfolder called "Secure" in each of these folders and set it to "Encrypted"; anything sensitive I will save in that folder. Any other suggestions? Again, this is from the point of view of your "regular office user".
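
    Once the "Secure" subfolders exist, they can be flagged for EFS from a command prompt as well as from Explorer; files created inside afterwards inherit the encryption. A sketch with the built-in cipher tool (the folder paths are examples):

        :: flag the Secure folders; new files inside inherit encryption
        cipher /e "%USERPROFILE%\Documents\Secure"
        cipher /e "%USERPROFILE%\Downloads\Secure"
        :: back up the EFS certificate and private key (writes efs-backup.pfx);
        :: without this, a reinstall or lost profile makes the files unreadable
        cipher /x efs-backup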

  • SPDIF passthrough not working in Windows 7

    - by adriangrigore
    Hi, I'm running Windows 7 on a computer with an Audigy Platinum eX sound card connected to a surround receiver via optical cable. Sound works fine when listening to non-surround audio sources, such as Windows sounds or MP3s. However, when I view a DVD in Media Center and the S/PDIF passthrough kicks in, I can only hear an awful noise instead of the movie soundtrack. Also, the receiver does not show the Dolby Digital or DTS symbol but stays at Dolby Pro Logic, so it seems it doesn't identify the sound encoding properly. I could switch off S/PDIF passthrough and use the sound card's decoder instead, but that's not an option for me, since it would create more problems with regular MP3 playback via an additional stereo receiver which is also connected to the same sound card. I've tried both the default Audigy drivers that come with Windows 7 and the latest drivers from the Sound Blaster website, but the problem remains unchanged. Also, I have verified that the receiver's Dolby Digital decoder is not broken by successfully connecting it to my PS3 and playing a Dolby Digital DVD. S/PDIF passthrough was working fine in Vista before I upgraded to Windows 7. Is there anything else I could try?

  • Password-protected sharing allows access to users who have no account?

    - by romkyns
    Running Win7 on two computers in my LAN. Computer A has password-protected sharing enabled, and shares a folder. It has a single user account "Bob", and the Guest account is turned off. The network is workgroup-based. According to the descriptions of the "password-protected sharing" I could find, the only people who can access the shared folder via the LAN are those who know the username+password for the "Bob" account. However a second computer on the LAN is able to view this shared folder by simply browsing to Computer A. They don't need to enter any passwords or anything. The only user account registered on that PC is called "Jim", and has a different password from "Bob". How on earth is computer B able to view this shared folder? Is the popular description of the "password-protected sharing" feature inaccurate / did I misunderstand it big time? P.S. There is a possibility that the password for "Bob" has been entered on that PC once, and possibly the "remember password" box was checked. I've looked in the "Credential Manager" on both computers and there is nothing saved anywhere.
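
    One way to test the cached-credentials theory from computer B (built-in commands on Windows 7, run in a command prompt):

        :: list active SMB connections and the state of each
        net use
        :: show credentials saved in Credential Manager (look for an entry for Computer A)
        cmdkey /list
        :: drop current connections so the next access triggers a fresh logon
        net use * /delete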
