Search Results

Search found 1503 results on 61 pages for 'independent'.

Page 42/61 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us. In the past we've already commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS), and we've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information and found almost nothing*... While it feels comfortable to think our system is perfectly secure (and it was certainly comfortable to show those reports to our clients when they performed their due diligence work), I have a hard time believing that we've actually achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can break into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could break into our small business, right? Does anyone have experience hiring an "ethical hacker"? How do you find one? How much would it cost? *The only recommendation they made was to upgrade the Remote Desktop protocol on two Windows servers, which they were able to access only because we gave them the correct non-standard port and whitelisted their IP.

    Read the article

  • Sun Grid Engine (SGE) / limiting simultaneous array job sub-tasks

    - by wfaulk
    I am installing a Sun Grid Engine environment and I have a scheduler limit that I can't quite figure out how to implement. My users will create array jobs that have hundreds of sub-tasks. I would like to be able to limit those jobs to only running a set number of tasks at the same time, independent of other jobs. For example, I might have one array job that I want to run 20 tasks at a time, another that I want to run 50 tasks at a time, and yet another that I'm fine running without a limit. It seems like this ought to be doable, but I can't figure it out. There's a max_aj_instances configuration option, but that appears to apply globally to all array jobs. I can't see any way to use consumable resources, as I'd need a "complex attribute" that is per-job, and that feature doesn't seem to exist. It didn't look like resource quotas would work, but now I'm not so sure of that. The documentation says "A resource quota set defines a maximum resource quota for a particular job request", but it's unclear whether an array job's sub-tasks' resource requests will be aggregated for the purposes of the resource quota. I'm going to play with this, but hopefully someone already knows the answer outright.
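
    One hedged possibility, separate from the consumable-resource and quota routes discussed above: later Grid Engine releases added a per-job task-concurrency cap directly on qsub. A minimal sketch, assuming your SGE version is recent enough to support the -tc switch (script names are placeholders):

        qsub -t 1-400 -tc 20 job_a.sh   # 400 sub-tasks, at most 20 running at once
        qsub -t 1-800 -tc 50 job_b.sh   # an independent cap for a different array job
        qsub -t 1-100        job_c.sh   # no -tc: limited only by global settings such as max_aj_instances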

    Read the article

  • SSH sometimes screws up connection when terminal overflows?

    - by SeveQ
    I've got a problem with SSH on a Debian Lenny based server (it's a vHost within a Xen environment, booted on a Xen kernel) and I hope someone can help me with this. The SSH connection frequently gets screwed up somehow when the terminal overflows (new lines beyond the bottom of the terminal, usually forcing it to scroll). The connection gets lost, but without a regular disconnect. It nearly always happens when I do the following: an existing SSH connection gets disconnected (normally); I tell PuTTY to re-establish the connection; the login prompt appears at the very bottom of the PuTTY terminal window; I enter my login name and press the enter key; I'm asked for the password, enter it, press the enter key and BOOM! Nothing more happens and I have to reconnect again. So it is reproducible. I'm not totally sure whether the connection crashes before or after I enter the password. Furthermore, it also happens when there is a lot of text to be displayed (for example when I compile something or do an ls -l on a directory with many entries). Using 'screen', however, helps to reduce the frequency of occurrence but doesn't solve the problem completely. Its occurrence is independent of which terminal software I use; I mostly use PuTTY but it also happens with other clients. I certainly hope somebody can help me solve this problem. Thanks in advance! //edit: I've just made a Wireshark trace of the SSH connection and there is nothing, I repeat, nothing different between the working and the failing connection (aside from frame numbers, ports and times, which obviously can't be equal). This leads me to the assumption that the error has to happen on the server side.

    Read the article

  • Juniper’s Network Connect ncsvc on Linux: “host checker failed, error 10”

    - by hfs
    I’m trying to log in to a Juniper VPN with Network Connect from a headless Linux client. I followed the instructions and used the script from http://mad-scientist.us/juniper.html. When running the script with the --nogui switch, the command that finally gets executed is $HOME/.juniper_networks/network_connect/ncsvc -h HOST -u USER -r REALM -f $HOME/.vpn.default.crt. I get asked for the password, a line “Connecting to…” is printed, but then the program silently stops. When adding -L 5 (most verbose logging) to the command line, these are the last messages printed to the log: dsclient.info state: kStateCacheCleaner (dsclient.cpp:280) dsclient.info --> POST /dana-na/cc/ccupdate.cgi (authenticate.cpp:162) http_connection.para Entering state_start_connection (http_connection.cpp:282) http_connection.para Entering state_continue_connection (http_connection.cpp:299) http_connection.para Entering state_ssl_connect (http_connection.cpp:468) dsssl.para SSL connect ssl=0x833e568/sd=4 connection using cipher RC4-MD5 (DSSSLSock.cpp:656) http_connection.para Returning DSHTTP_COMPLETE from state_ssl_connect (http_connection.cpp:476) DSHttp.debug state_reading_response_body - copying 0 buffered bytes (http_requester.cpp:800) DSHttp.debug state_reading_response_body - recv'd 0 bytes data (http_requester.cpp:833) dsclient.info <-- 200 (authenticate.cpp:194) dsclient.error state host checker failed, error 10 (dsclient.cpp:282) ncapp.error Failed to authenticate with IVE. Error 10 (ncsvc.cpp:197) dsncuiapi.para DsNcUiApi::~DsNcUiApi (dsncuiapi.cpp:72) What does “host checker failed” mean? How can I find out what it tried to check and what failed? The Host Checker Configuration Guide mentions that a $HOME/.juniper_networks/tncc.jar gets installed on Linux, but my installation contains no such file. Does that mean Host Checker is disabled for my VPN on Linux? Are the POST to /dana-na/cc/ccupdate.cgi and “host checker failed” connected or independent? By running the connection over an SSL proxy I found out that the POST data is status=NOTOK (funny side note: the client of the oh-so-secure VPN does not validate the server’s SSL certificate, so it is wide open to MITM attacks…). So it seems that it’s the client that closes the connection, and not the server.

    Read the article

  • Adding Netem Filter Rules

    - by fontsix
    I am new to programming and to using Linux. My question is: is it possible to add netem filter rules later? I want to create a PHP interface for netem, and I don't know in advance how many filters will be required, so this should work dynamically. For example: a user with a static IP starts a netem command (latency) through the PHP interface, which means these five commands are executed by PHP in the first step: $classid = 11; $handle = 10; "sudo tc qdisc add dev eth0 handle 1: root htb"; "sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 100Mbps"; "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps"; "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms"; "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid"; Now, if a second user wants to use netem independently of the first user, I only want to execute the last three commands, like "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps"; "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms"; "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid"; There is an algorithm for incrementing the variables $classid and $handle, so that part should work. Now my question: is it possible to add a new class with a new qdisc and a new filter rule just by running these three commands? Or how else can I realize it? The Apache error_log tells me "sh: line 1: flowid: command not found", but I can't find any mistake. I hope you can help. Best regards, fontsix
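
    For reference, a minimal shell sketch of the incremental case described above, assuming the root qdisc and the 1:1 parent class from the first user already exist; the destination IP is a placeholder. Note that each tc invocation has to reach the shell as a single line: if the string PHP passes to exec() contains a stray newline before "flowid", the shell tries to run "flowid" as its own command, which is exactly the error quoted above.

        # Second user: new class 1:12, new netem qdisc with handle 20:, new filter.
        sudo tc class  add dev eth0 parent 1:1 classid 1:12 htb rate 100Mbps
        sudo tc qdisc  add dev eth0 parent 1:12 handle 20: netem delay 100ms
        sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst 10.0.0.2 flowid 1:12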

    Read the article

  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server? Speaking for my own company (a small charity with about 12 employees), we use our server (Windows Server 2003) for the following: email via Microsoft Exchange; central storage; acting as a print server; user authentication / Active Directory. There are significant costs associated with running a server like this: electricity, first for the server itself and then for the air conditioning it requires (this thing pumps out a lot of heat); noise (of which there is a lot); and IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong). I've found ways to replace many of these functions with cheaper (better?) alternatives: Google Apps / Gmail is a clear win for us: we have so many spam-related problems it's not even funny, and Outlook is dog slow on our aging computers. You can buy networked storage devices with built-in print servers, such as the Netgear ReadyNAS™ RND4210, which would allow us to store and share all of our documents and to access printers over the network. The only thing that I can't figure out how to do away with is the authentication side of things: it seems to me that if we got rid of our server, we'd essentially have a bunch of independent PCs with no shared pool of user accounts and no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end but without the expensive central server?

    Read the article

  • GtkView GtkSpell and myspell backend

    - by justadreamer
    Hi, I have the IM client Pidgin, which is launched under the locale ru_RU.UTF-8. When I type messages in the GtkView widget it highlights misspelled words. However, it uses GtkSpell, which uses enchant, which uses the myspell backend (I provided it with a symlink from the enchant folder /usr/share/enchant/myspell to the OpenOffice dictionaries folder /usr/share/myspell/dict). Now the problem is that whichever language I type in, it still uses the ru_RU locale to select the dictionary, so all English text gets underlined as misspelled. When I switch the locale to en_US and then launch Pidgin under it, all Russian text becomes misspelled instead. I don't like this behaviour, as I use Pidgin to chat in both languages. Is there a way to set up enchant/myspell so that it checks against both dictionaries, en_US and ru_RU, independent of the locale I launched Pidgin (GtkView) in? I have the Debian lenny/5.0.1 distro and enchant version 1.5.0. /usr/share/enchant/enchant.ordering looks like this: *:myspell meaning that the backend is myspell for all languages. The myspell dictionary.lst file looks like this: DICT en GB en_GB DICT en US en_US THES en US th_en_US_v2 THES en GB th_en_US_v2 THES ru RU th_ru_RU_v2 DICT ru RU ru_RU DICT uk UA uk DICT ru RU en_US

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that is in reality a folder on a NAS, but so far I haven't found anything that states whether and how this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • Turn Excel spreadsheet into a formula

    - by ?????? ??????????
    I have an Excel spreadsheet with a complex computation that is not trivial to turn into a macro or a single-cell formula. The spreadsheet has about 10 different inputs (values a human enters in different cells of the spreadsheet) and then it outputs 5 independent calculations (in 5 different cells) based on that input. The calculation uses some pre-entered data in the spreadsheet (about 100 different constants) and does some look-ups on them. Now I would like to use this whole spreadsheet as a formula on a different spreadsheet, to calculate a set of input values and produce the corresponding set of output values. Imagine this as creating a separate table with 10 columns for the input variables and 5 columns for the outputs, then copying each set of inputs into the other spreadsheet and copying the outputs back into the results table. For instance: A1, A2, A3,... A10 are cells where someone enters values; through a series of calculations B1, B2, B3, B4 and B5 are updated with some formulas. Can I reuse the whole series of calculations from A1..A10 into B1..B5 without creating one massive huge formula or a VBA macro? I want to have a set of input values in 100 rows from A100, B100, C100,... J100 onward, and then do some Excel magic that will: 1. copy the values from A100...J100 into A1 to A10; 2. wait for the results to appear in B1 to B5; 3. copy the values from B1 to B5 into K100 to O100; 4. repeat steps 1 to 3 for all rows from 100 to 150.

    Read the article

  • Windows is very slow with my new SSD

    - by Maksym H.
    I have an HP ProBook 4520s laptop with a Core i5 M480 @ 2.67GHz, 4GB RAM, a 640GB HDD and a Radeon HD 6370M 1GB video card. It would seem like a good stack for work, right? But my HDD crashed after about a year of carrying the laptop around every day. After buying a new SSD (Patriot Memory Torqx II 128GB SATA II) and installing Windows 8 from scratch, it was amazingly fast. But once I had only installed Windows updates the speed already felt the same as my old HDD, and after installing the other software I need for work it became so slow that my old, lower-spec PC really works better than my awesome laptop... I checked that TRIM and AHCI mode are turned on. So why is that? I asked Patriot Memory support for help; they suggested sending them ATTO test results, which I did. Here is the response: "Thank you very much for the attached results. Looking at the results, I can see that your SSD speed is a lot lower than it should be. Can you tell me your system specs?" While waiting for their reply, I reinstalled, going from Windows 8 back to Windows 7, and it was again perfect at first, but the story repeats: it becomes slower and slower with every newly installed piece of software. Check out some screenshots (sorry that the Task Manager screenshot is in Russian; I hope you can recognize the corresponding parameters in your English or other-language Task Manager). So the main issue is that something keeps loading the disk at 100% and the response time jumps around 1000-3000 ms. Why am I asking about Windows? Because I tried installing Linux Mint (x86) and it just flies: great performance independent of how many programs I have installed. Only Windows (either 7 or 8) has this problem. So, guys, I'd appreciate any ideas on how to fix this, and maybe an answer to the main question: why is it so? Thanks!

    Read the article

  • Reliable custom Windows shortcut keys?

    - by Peter Baer
    I have global Windows shortcut keys assigned to several different cmd.exe instances. I do this by creating shortcuts to cmd.exe on my desktop, and assigning each one a unique shortcut key (for example, CTRL + SHIFT + U). Pretty basic stuff. I'm using Win2K8 (R1 and R2). This works just fine... most of the time. But with infuriating regularity, sometimes it doesn't. Or it will work with a long delay (many seconds). It doesn't matter what app currently has focus (it can even be one of the command prompts). It doesn't matter what keys I assign (I've tried a few variations of WIN, CTRL and SHIFT). I did notice that this is often, but not always, correlated with explorer.exe struggling in some way or another (say, an explorer window opened to a file share that's unavailable, or an app being unresponsive, or whatever). In other words the shortcut key handling appears to be very sensitive to unrelated system activity. Note that whenever I have this problem I can always successfully ALT + TAB to the window I want to get to, but that's tedious. I use the shortcuts to these command windows hundreds of times a day so even a 1% failure rate becomes really annoying. Is there a way to fix this, or is there some third-party utility out there that will RELIABLY intercept custom key combinations to bring focus to whatever apps I want, in a way that is independent of other system activity? ADDENDUM: There is a property of the Windows shortcuts that I would not want to lose if switching to a third-party hotkey tool: Windows shortcuts are idempotent. Once you've launched a shortcut to some app, pressing the shortcut key combo again takes you to the already launched process - it does not launch a new process.

    Read the article

  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: a WHM/cPanel CentOS 5 server running Exim and Courier for mail services, and BIND for domain name services. I recently moved servers. The old server was running a highly similar configuration, and all accounts were ported via WHM. However, the server is unable to send, and sometimes to receive, email. The errors I am seeing (when I do get an error mail back) state: 450 4.1.8 : Sender address rejected: Domain not found. Edit for clarity: this is the error response from remote mail servers, and numerous independent mail servers come back with the same error (the email address is merely one valid example). My first instinct of course was to check the domain records. However, k-t.org appears to have a valid record (including an MX record), even after running it through domain checks on a completely different server elsewhere and online. Note that the issue appears to happen with all the domains hosted on the server, not just k-t.org. I have also ensured that a PTR record was created. My Googling has only led me to people who had fairly basic DNS mistakes, but either I'm blind/dumb (possible, DNS is not my strong suit) or it's something a bit more arcane. I've run out of ideas, and I can't seem to find anything that could explain why servers are unable to resolve the domains. There doesn't seem to be anything missing or incorrect.
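
    For reference, a hedged sketch of the checks a remote MTA effectively performs on the sender domain, run both against the server's own BIND and against an outside resolver; k-t.org is the domain named above, while the reverse-lookup IP is only an example:

        dig +short MX k-t.org              # via the local resolver (the server's own BIND)
        dig +short MX k-t.org @8.8.8.8     # via a public resolver, i.e. roughly what remote servers see
        dig +short A  k-t.org @8.8.8.8     # fallback A record if no MX is published
        dig +short -x 198.51.100.25        # PTR for the outbound IP (example address)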

    Read the article

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services are only listening on one of them, and to check them correctly I'd like to know whether it's possible to have multiple IP addresses configured for a single host in Icinga. Here's a minimal example: Remote Server: eth0: 1.2.3.4 (public IP) eth1: 10.1.2.3 (private IP, secure tunnel) Apache listening on 1.2.3.4:80 (public only), OpenSSH listening on 10.1.2.3:22 (internal network only), Postfix SMTP listening on 0.0.0.0:25 (all interfaces). Icinga Server: eth0: 10.2.3.4 (private IP, internet access). Now if I define a host: define host { use generic-host host_name server1 alias server1.gertvandijk.net address 10.1.2.3 } This will not check the HTTP status correctly. And defining an additional host: define host { use generic-host host_name server1-public alias server1.gertvandijk.net address 1.2.3.4 } will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts to show up as a single host, yet still have an easy configuration to check the services on their proper addresses. What is the most elegant, number-of-configuration-lines-saving solution to this? I read about several plugins available to work around this, but I can't figure out what the current way to address it is. Solutions go back to 2003, but I'm running Icinga 1.7.1, which is already capable of the address6 option, yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as the one on 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up by now. Please share your solution to this. Thanks! :)
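
    One hedged way to express "same host, per-service address" without a second host object is to point the individual checks at explicit addresses, which the standard plugins already allow from the command line; the paths and addresses below match the example above:

        # check_http separates the address to connect to (-I) from the Host: header (-H),
        # so the web check can target the public interface while the host object keeps 10.1.2.3.
        /usr/lib/nagios/plugins/check_http -H server1.gertvandijk.net -I 1.2.3.4 -p 80
        /usr/lib/nagios/plugins/check_smtp -H 10.1.2.3 -p 25

    In the object configuration this usually ends up as a custom host macro (for example _public_address) referenced from the check command, so only one host shows up in Icinga.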

    Read the article

  • Connecting a laptop to a TV via HDMI

    - by Madmartigan
    I just bought a new Dell XPS 17 laptop (Win7) that only has HDMI output. My last 2 laptops had VGA, which I used to connect to my Sony Bravia 32" TV with no issues, but with HDMI it's been quite a headache. The display adapter drivers have been updated to the latest versions: Intel(R) HD Graphics Family and NVIDIA GeForce GT 550M. I went to a store and plugged in to 4 different TVs from different manufacturers. A sales rep and I spent about 30 minutes being baffled by the results (which are the same as on my current TV): extremely buggy behavior in the Nvidia and Windows display/resolution control panels; cannot extend or duplicate displays, can only select one; third and fourth output devices "randomly" detected by the Windows control panel; could not get the screen to fit the output (edges cut off on all sides by about half an inch); resolution and colors less than perfect, with artifacts around text; display "randomly" cuts out; defaults to TV output only when plugged in; cannot change resolution on either device when connected; no audio from the TV. Plugged in to 3 monitors from different manufacturers: defaults to duplicated displays when plugged in, and everything works perfectly. So far, four people have gone through all the settings in the laptop with no luck. I'm currently using the Sony Bravia at home, but in order to get it to work I have to turn on the laptop, wait until the display shows up on it, close the lid, then cycle through each output channel on the TV until I come back around to the HDMI port again, and still I have the symptoms described above. However: once in a while, it just works. Sometimes, seemingly randomly, the output fits the screen perfectly. Sometimes the audio comes through the speakers too, but not always. Usually my "Mystify" screen saver will come up with a message that it cannot be displayed due to a limitation of the video card, but sometimes it works fine. These 3 things seem to be independent of each other and don't always happen together. So, is there any way to get the laptop to output correctly to a TV, or is it just not meant to be?

    Read the article

  • Why are ISPs installing routers on my site when the feed is already a form of Ethernet?

    - by Cosmin Prund
    I'm connected to 3 ISPs right now. Two of them already have routers at my site; the third one announced that "they need to install some equipment" when I requested a BGP session. I can only assume they need to install a router, since that connection is now working fine, using the usual /30 net block for the connection, and the "last-mile" solution is not going to change, since they only installed it last week and the BGP was in the contract from the beginning. I simply don't understand this: the "feed" is already a form of Ethernet. Even though they're using different technologies for the last mile, they all enter the ISP router through an RJ45 WAN port. I assume the ISP router does something really important that can't be done by the big router on the other end of the connection. It must also be something that can hurt them if misconfigured, since they don't trust us (the client) to do that part on our router. And I'm not talking about cheap throw-away routers here: one of the routers is a Cisco 2800. Edit to add network details: I'm connected to 3 ISPs, two over radio links, one over fiber optic. One of the radio links is going to be dropped, and the other radio link will be turned into fiber sometime next year. The fiber is 20 Mbit, radio 1 is 40 Mbit and radio 2 is 2 Mbit. I've got a /24 of provider-independent address space. I'm not doing anything out of the ordinary with my network; I'm overly connected because my network needs to be "up" all the time.

    Read the article

  • Need a place to store a few bytes of meta information on storage media

    - by Jason C
    I'm working on an embedded project. I need a place to store some filesystem-independent meta information on a storage device. The device has an MSDOS partition table. The device may also have unallocated space (depending on its size), but it will be TRIMmed (and also may be blown away by new partitions in the future). I need a location on the device that is not unallocated and that has a low risk of being touched (outside of completely erasing the device). The device is only guaranteed to have an MBR at the point the meta data first needs to be written, meaning there are no EBRs/VBRs present that I could use. There are 446 bytes at the very start of the device available for MBR bootstrap code. Currently my only idea is to store data at the end of this block. However, the device is bootable and I have no way of knowing whether I'd be blowing away bootstrap code or not. The sector size is 512 bytes and the MBR is the first sector; I'm pretty sure (correct me if I'm wrong) that means the second sector is available for use by partition data, so I can't use that either. Does anybody have any ideas? I need 4 bytes of space.
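
    A hedged sketch of the mechanics only (it does not answer the "am I overwriting boot code?" question): with a one-byte block size, dd can read or write a 4-byte slot at an arbitrary offset inside sector 0. The offset below is a hypothetical placeholder; bytes 440-443 hold the Windows disk signature and 446 onward hold the partition table, so the exact spot needs care on a bootable disk.

        OFFSET=436    # hypothetical choice near the end of the bootstrap area
        dd if=/dev/sdX of=meta.bin bs=1 skip=$OFFSET count=4                 # read the 4 bytes
        dd if=meta.bin of=/dev/sdX bs=1 seek=$OFFSET count=4 conv=notrunc    # write them back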

    Read the article

  • Routing and authenticating all access through squid

    - by Knight Samar
    Hi, I want to route all Internet access in my network through a Squid proxy server and authenticate and log all users. I want this to be a client-independent setting, so that no one needs to do anything on their browsers or machines. I have set my network gateway as the proxy server so that all traffic will be sent to it; I have done this using options in the DHCP server. I tried using Squid as a transparent proxy, but it won't authenticate in that mode. I tried using iptables to route all traffic to port 3128, but the authentication dialog box from Squid never pops up. I then tried telling DHCP to hand WPAD to all clients by placing a WPAD file on a web server, with the following for automatic proxy configuration on clients. Changes in dhcpd.conf: option wpad code 252 =test; option wpad "\n\000"; option wpad "http://192.168.1.5/wpad.dat\n"; The WPAD file: function FindProxyForURL(url,host) { return "PROXY squid-server-ip-address:3128 ; DIRECT "; } But the browsers (different versions of Firefox and IE) seem to ignore it. :( What should I do?

    Read the article

  • Simple electric DC question. Current consumption

    - by Bobb
    Suppose you have a DC power supply and a consumer connected to it (e.g. a computer PSU and a hard drive). Suppose the PSU that was supplied with the consumer has a 5V 1A output, so I assume the consumer should not draw more than 1A. Suppose the original PSU is now broken and I want to replace it with one I have, which is 5V 10A. My guess is that current is something that depends on the consumer: if the consumer normally draws 1A, it will not draw more than that even if it is connected to a 10A PSU. In other words, am I right in assuming that the consumer will not burn out when connected to a power supply with a higher current rating? P.S. My understanding is that voltage is independent of the consumer: if you give it a higher voltage it will burn (voltage is imposed by the PSU on the consumer). Current, however, must be the opposite: the consumer draws as much current as it needs, not as much as the PSU can provide (given, of course, that the PSU's maximum current is greater than what the consumer needs).
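
    A worked version of the reasoning above, assuming a load that effectively behaves like a fixed resistance at its rated operating point:

        R_{\mathrm{load}} = \frac{V}{I} = \frac{5\,\mathrm{V}}{1\,\mathrm{A}} = 5\,\Omega,
        \qquad
        I_{\mathrm{drawn}} = \frac{V}{R_{\mathrm{load}}} = \frac{5\,\mathrm{V}}{5\,\Omega} = 1\,\mathrm{A} \ll 10\,\mathrm{A}\ \text{(the larger supply's rated maximum)}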

    Read the article

  • Configuring VMware virtual machines to run under different IPs and PC specs

    - by Alex
    Right now I'm using a simple VMware virtual machine with Windows 7 preinstalled. The IP is assigned automatically (it's the same as the host OS IP). Is it possible to create several virtual machines that have different hardware specifications and different IP addresses? Here is what I mean by each of these. Specs: certainly, you can easily change some specifications in the Settings menu (RAM size, HDD size), but what about advanced settings? For example, for the processor: is it AMD (2500+, 4000+, etc.) or Intel (Core 2, Pentium, etc.)? RAM: is it Corsair 4 GB 1333 MHz, Kingston 2 x 2 GB 866 MHz, or something else? HDD: is it a Seagate Barracuda 80 GB 5400 rpm, a Samsung 500 GB 7200 rpm, or some random SSD? Programs that run inside a virtual machine shouldn't have a clue whether it's VMware or not. IPs: every program launched under the main OS uses the real IP, 93.56.xx.xx; all programs launched under virtual machine A use IP 1, 74.78.xx.xx; all programs launched under virtual machine B use IP 2, 84.159.xx.xx. I believe you have to use either a VPN or a proxy to solve this part. To sum up: the idea is to create 2-3 independent virtual machines with different hardware specifications and IP addresses. Programs running under a given virtual machine shouldn't have a clue whether it's VMware or a real PC. Any ideas, tips or configuration experience will be appreciated!

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some less so, so some of them I want to put on RAID-6, while for others RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using Linux RAID. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of, say, the RAID-6 kind, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would sit completely independent file systems, which I'd later merge with multiple mount --bind calls into a single directory structure that reflects the logical structure of the data rather than that of the storage. But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it can help me. If so, what do you think would be the best setup?
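
    For orientation, a hedged sketch of how this usually maps onto ZFS: redundancy is a property of a pool's vdevs, not of individual datasets, so truly different RAID levels per directory still mean separate pools (possibly built from partitions, much like the md/LVM plan), while datasets within one pool only provide the logical layout. Device and dataset names below are placeholders.

        # Option A: one raidz2 pool across all five disks; datasets give structure
        # and per-dataset compression, but share the pool's double-parity redundancy.
        zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
        zfs create -o compression=on tank/critical
        zfs create -o compression=on tank/bulk
        # Option B, closer to the partition-based plan: separate pools per RAID level,
        # grown later with e.g. "zpool add tank6 raidz2 <five new partitions>".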

    Read the article

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach. Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host: $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/ This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
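
    For what it's worth, a hedged sketch of one way this can look with qemu/KVM rather than VMware (assuming the cluster nodes expose /dev/kvm; all paths, names and the in-guest startup hook are assumptions): every instance boots the same read-only base image with -snapshot, so no per-instance disk copies are needed, and results travel over a 9p share instead of the network.

        BASE=ubuntu-task.qcow2          # base image with all library dependencies preinstalled
        RESULTS=/another/host/dir       # host directory exported to the guest
        qemu-system-x86_64 -enable-kvm -m 2048 -nographic \
            -snapshot -drive file=$BASE,if=virtio \
            -virtfs local,path=$RESULTS,mount_tag=hostshare,security_model=none \
            -net none
        # Inside the guest, something like rc.local would run:
        #   mount -t 9p -o trans=virtio hostshare /share && /share/foo.sh; poweroff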

    Read the article

  • How can I monitor network traffic?

    - by WIndy Weather
    I have a home network with about 10 devices, including a Blu-ray player [Netflix] and both Windows and Linux machines. I need to collect network traffic statistics so that if questions come up about how much traffic I'm using, I have the answer independent of my ISP. I've looked at DD-WRT, but I see that even buying a new router that will be supported is a problem, since I might get the wrong hardware revision. I have a DIR-655 and a DIR-501, neither of which is supported. I don't mind buying new hardware, but it looks like a crap-shoot to get one that will work, so DD-WRT looks like a bad solution unless someone knows of a place to get a router that is guaranteed to work. Does someone know of an Arduino or other SBC solution? I have plenty of NAT routers already, so I just need statistics for external traffic. The network is gigabit Ethernet inside and cable (soon to be DSL) outside. Oddly enough, the DIR-655 only gives me "packets", not bytes transferred. Thanks, ww

    Read the article

  • Thunderbird: export email account settings

    - by zpea
    I'd like to create a new profile for Thunderbird using the same mail accounts I already configured in my old profile. As it is quite a number of accounts, it would be great to have a way to export/import them instead of writing down the settings just to fill them in again in the new profile. Using web search and the search here, I mainly found the following suggestions, which do not match what I need: Copy the whole profile: not possible for me, as I don't want to copy other settings, the downloaded mail data etc., and the old profile broke when it ran out of space in the home folder anyway. Use MozBackup: there seem to be several programs by that name (forks?); in any case, it's Windows-only and hence not an option (I am mostly on Linux and prefer platform-independent solutions anyway). Use accountex: seems to do what I want, but it is not compatible with the current Thunderbird version (it supports only up to version 3.1). Posts with various tips from 4 years ago: top results in a web search, but they do not work in current versions of Thunderbird either. Did I overlook anything? After all, it doesn't sound like I'm looking for something nobody has ever looked for.

    Read the article

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent Internet connections to my home. This has the advantage of redundancy etc., but the drawback that each connection maxes out at about 6Mb/s. So an individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine for complex web pages and utilises both external connections well. However, single-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking: surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data back to the client's request. I can see this has some overhead, and also some edge cases where there will be blocking problems while waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one request. How can I achieve this HTTP request deconstruction/reconstruction? Or am I just barking mad?
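
    A hedged sketch of the splitting idea itself, using curl byte ranges; the URL and the 12 MB size are placeholders, and the origin server has to honour Range requests. Each half is an independent HTTP request, so the gateway is free to send them out over different uplinks:

        URL=http://example.com/big.iso
        curl -s -r 0-6291455         -o part0 "$URL" &
        curl -s -r 6291456-12582911  -o part1 "$URL" &
        wait
        cat part0 part1 > big.iso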

    Read the article
