Search Results

Search found 10494 results on 420 pages for 'beyond the documentation'.

  • Dell Driver Support for Latitude E6320 Windows 7 Enterprise

    - by IamPolaris
    I recently did a reinstall of Windows 7 Enterprise on a Dell Latitude E6320, which is a 64-bit system. After the install process and the typical Windows Update routine, I looked at Device Manager and found several devices with missing drivers.

    After going to the Dell Support site, looking at the files, and doing some sleuthing, I found the following support document: http://downloads.dell.com/utility/Latitude%20E-Family%20%20Mobile%20Precision%20Re-Image%20How-To%20Guide%20-%20A03%20Rev%203%200.pdf It hints in Appendix C that the Broadcom USH is the ControlPoint Security device and that the unknown device is the micro free-fall sensor. The network controller is my wireless adapter (I cannot connect wirelessly), and I am not sure about the final missing driver.

    Attempting to install the ControlPoint Security exe from the support page does not work. After downloading it, I am told that I am attempting to install a 32-bit driver on a 64-bit machine, EVEN THOUGH I selected the Win7 64-bit option on the support page. Beyond that, some of the drivers (which are confusingly described, making it hard to understand what they do) and the system utilities that are supposed to make this process simpler will either a) not run, because they are 32-bit exes, or b) fail, because the support page cannot find the file it tries to download.

    Is there anything I can do to get (at the very least) my wireless running, and ideally all of my drivers? A solution which assumes Dell is completely incompetent would be ideal. :P Some forums have said that I should download the chipset driver; others say to get the system utility file (DSS_UTIL_WIN_R282536.EXE). I have had no luck as of yet...

  • OS X superuser folders automatically created; peruser launchd process appears to kill 501

    - by Ric Pen
    New Apple laptop, OS X 10.8.2. I have used OS X before, but many years ago, and am not familiar with the subtleties of, or changes to, com.apple.launchd.peruser.x...

    I have previously (and, in retrospect, foolishly) made changes to these rapidly spawned new peruser accounts. My initial reaction was that if ipfw was disabled, I might well be under hacker attack, something I dealt with years ago. But I believe I was wrong, and my efforts at preserving the system's integrity have in fact been destructive and an overreaction, and have resulted in much work to restore. My understanding from other posts is that superuser protocols have changed quite dramatically since I bought the first developer version of OS X many years ago. I haven't developed on Apple much since then, with the exception of WebObjects (IMO much underrated at the time, and more user-friendly than ASP prior to .NET, as I vaguely recall).

    The creation of these apparently nasty peruser folders appears to confound the 501 process, which logs an inability to find the firewall (ipfw). Can someone help me with this? I am concerned that either the system is improperly configured or an application was improperly installed (although there is little here beyond Apple's SDK, which I find quite accommodating and intuitive). Still, I am a novice, only develop sporadically at this time, and would really just like to see this system running happily.

    Please offer assistance, in the form of potential info sources or, if you have had a similar experience, perhaps scripts to suss out this issue. I do not wish to damage the system, but Apple's Developer Connection and discussion threads do not appear to have dealt with this particular issue recently... although I may well have missed something you have not - please apprise. Any assistance on this issue is very much appreciated - by an old guy who wants to do some things which were fun about 20 years ago.

  • Incorrect durations in mp4 files created by ffmpeg (avconv)

    - by Ruslan Sharipov
    Example usage:

        avconv -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn -map 0 -f segment -segment_format mp4 -segment_time 60 -y %05d.mp4

        avconv version 0.8.3-6:0.8.3-1+b1, Copyright (c) 2000-2012 the Libav developers
          built on Jun 15 2012 13:54:35 with gcc 4.7.0
        HandShake: client signature does not match!
        Metadata:
          height                480.00
          remote_addr: sdp_session {sdp_session,0,
            {sdp_o,"-","1289703354974145","1289703354974145",inet4,"10.1.12.99"},
            "Media Presentation", {inet4,"0.0.0.0"}, {0,0},
            [{"control","*"},{"range","npt=0.0
          start                 30400239.52
          timeshift_duration    319250.58
          timeshift_size        120000.00
          width                 640.00
        [flv @ 0x1d36a40] Estimating duration from bitrate, this may be inaccurate
        Input #0, flv, from 'rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af':
          Duration: N/A, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: h264 (Baseline), yuvj420p, 640x480 [PAR 1:1 DAR 4:3], 1k tbr, 1k tbn, 2k tbc
        Output #0, segment, to '%05d.mp4':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: libx264, yuvj420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 1k tbn, 1k tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (copy)
        Press ctrl-c to stop encoding
        ^Cframe= 9566 fps= 36 q=-1.0 Lsize=      -0kB time=318.25 bitrate=  -0.0kbits/s
        video:30348kB audio:0kB global headers:0kB muxing overhead -100.000071%
        Received signal 2: terminating.

    Result:

        serafim@yard:~/video2$ ls
        00000.mp4  00001.mp4  00002.mp4  00003.mp4  00004.mp4  00005.mp4

    Now try to play the files in a player such as VLC. Here is what we get: the first fragment (00000.mp4) plays well, no problems, but from the second one (00001.mp4 and beyond) a bug manifests itself: the first 60 seconds of 00001.mp4 are a black screen, and the video only starts playing at second 61.

    Attachments: https://dl.dropbox.com/u/760901/rtmp_and_mp4.zip

    How can I get rid of the delay with a black screen at the beginning of the segments? Is there a parameter I can pass to ffmpeg, or third-party software that can fix the resulting mp4 segments?
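
    One possible angle, which is my suggestion rather than anything from the original post: newer ffmpeg releases let the segment muxer restart timestamps at zero for each segment, so every file begins at t=0 instead of carrying the offset from the source stream. A minimal sketch, with STREAMKEY standing in for the real stream name:

        # hedged sketch: requires an ffmpeg build whose segment muxer
        # supports -reset_timestamps (not the avconv 0.8.3 shown above)
        ffmpeg -i rtmp://maps.lo.ufanet.ru/live/STREAMKEY \
            -c:v copy -an -sn -map 0 \
            -f segment -segment_format mp4 -segment_time 60 \
            -reset_timestamps 1 -y %05d.mp4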

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed before, but it has been several months, so it may be time to revisit it: earlier discussion re Splunk alternatives.

    For the record, Splunk rocks. But the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5 GB/day of data was over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter. The Splunk cost is simply too high (by a multiple).

    Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:

    - able to listen for syslog messages on multiple UDP ports
    - able to index the incoming data asynchronously
    - some kind of search engine
    - some kind of UI
    - an API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers.

    Thanks for your thoughts!
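
    Whatever system ends up shortlisted, a quick smoke test for the multi-port syslog requirement might look like the sketch below. This is my addition, not from the original question; it assumes a reasonably new util-linux logger with network options, and the ports are placeholders:

        # send a test syslog message to each candidate UDP listener port
        # (needs a logger build with --server/--port/--udp support)
        for port in 514 1514 5514; do
            logger --udp --server 127.0.0.1 --port "$port" "test message on UDP $port"
        done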

  • How can I password protect an IIS directory with only FTP access?

    - by Tony Adams
    How can I password protect an IIS directory when I only have FTP access to the server? I can't adjust any IIS settings or add users or anything like that. The answer to "IIS Basic Authorization ala .htaccess/.htpasswd in apache" does not help, as I only have access to the server via FTP. I just need to password protect one directory.

    I've tried several variations of a web.config file. I can get a basic HTTP auth form to pop up when a user attempts to load a page from my test directory, but I can't configure the authentication part. Whenever I add an <authentication> section to my web.config, the server complains that:

        Parser Error Message: It is an error to use a section registered as
        allowDefinition='MachineToApplication' beyond application level. This error
        can be caused by a virtual directory not being configured as an application
        in IIS.

    I'm grateful for any help anyone can offer.

    Edit: I don't know what version of IIS is running on this server, but here is the server tag from its error messages: Version Information: Microsoft .NET Framework Version:1.1.4322.2490; ASP.NET Version:1.1.4322.2494

  • Hotmail mail delivery issue (spam)

    - by chaochito
    Hello, I am running a Postfix server on a dedicated machine in a Linux environment (CentOS 5.3) for a social networking web application, and I am experiencing deliverability issues with Hotmail (mail I send to Gmail, Yahoo and AOL lands in the inbox). I only send legitimate mail, notifications for registered users. I have SPF, DK and DKIM set up.

    I pass the Sender ID test when mailing to [email protected], but in Hotmail headers we only get "X-Auth-Result: None" and no "X-SID-Result: Pass". We have been enrolled in their program for more than two weeks, and normally, once accepted into the Sender ID program, you are supposed to see both "X-SID-Result: Pass" and "X-Auth-Result: Pass". I contacted Hotmail about the issue; they told me that my domain looks like it has been added to Sender ID in their system, that this is beyond their support, and that I should contact my ISP. As you can imagine, my ISP has no clue about that either.

    I don't really know what could be wrong... Mails are currently filtered as spam, and we would like them to land in the inbox.
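
    Two DNS checks worth re-running from outside the network (my suggestion, not from the original post; example.com and the selector name are placeholders for the real domain and for whatever selector the DKIM signer is configured with):

        # confirm the published SPF record
        dig +short TXT example.com
        # confirm the DKIM selector record
        dig +short TXT default._domainkey.example.com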

  • SQL Server performance on VSphere 4.0

    - by Charles
    We are having a performance issue with our VMware environment that we cannot explain, and I am hoping someone here may be able to help. We have a web application that uses a database backend. We have a SQL Server 2005 cluster set up on Windows 2003 R2 between a physical node and a virtual node. Both physical servers are identical 2950s with 2x Xeon X5460 quad-core CPUs and 64 GB of memory, 16 GB of which is allocated to the OS. We are using an iSCSI SAN for all cluster disks.

    The problem is this: under repeated stress testing of the application while adding CPUs to the cluster nodes, the physical node scales from 1 pCPU to 8 pCPUs, meaning we see continued performance increases. When testing the node running on vSphere, we take the expected ~12% performance hit for being virtual, and we still scale from 1 vCPU to 4 vCPUs like the physical node, but beyond that performance drops off; by the time we get to 8 vCPUs we are seeing performance numbers worse than at 4 vCPUs.

    Again, both nodes are configured identically in terms of hardware, guest OS, SQL configuration, etc., and there is no traffic other than the testing on the system. There are no other VMs on the virtual server, so there should be no competition for resources. We have contacted VMware for help, but they have not really been any, suggesting things like setting SQL processor affinity which, while helpful, would have the same net effect on each box and should not change our results in the least. We have looked at all of VMware's SQL tuning guides for vSphere with no benefit. Please help!
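
    One diagnostic I would add (my suggestion, not something from the original post): on a host with 8 physical cores, an 8-vCPU guest leaves the scheduler no headroom, and that kind of co-scheduling contention shows up as CPU ready time rather than as load inside the guest. esxtop on the ESXi host makes it visible:

        # on the ESXi host console (or resxtop from a management station)
        esxtop
        # press 'c' for the CPU panel and watch the SQL VM's %RDY column;
        # sustained high ready time as vCPUs are added points at scheduling
        # contention on the host rather than anything inside the guest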

  • VNC Address Book - Window Invisible? Working as intended?

    - by user1445967
    I have VNC Server and VNC Viewer on the same PC with Windows 7 Ultimate x64. Is VNC Address Book supposed to have a main application window? I can't tell if this is bugged or working as intended.

    When I start VNC Address Book, it creates an item in the task list and also an icon in the notification area. If I click the task list item once, nothing happens. If I click again, it disappears from the task list but remains in the notification area.** If I right-click the icon in the notification area, there is an option 'open address book' which, if chosen, creates the item in the task list again, with exactly the same behavior. If I left- or right-click the address book icon, I have ways to connect to servers in the address book. However, I have no way to add, remove or edit servers in the address book.

    **Note that this behavior is identical to that of an application with a main window and an option to 'minimize to system tray / notification area'. Only here, no main window is ever visible.

    If fixing this problem is beyond the scope of this site, that's fine, but at this point I have no real way to tell whether the program is malfunctioning on my PC or whether it is far more minimalist than I expected. There appears to be no forum on realvnc.com where I can ask about this; perhaps that is to prevent me from getting any help unless I pay to register?

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID 1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via network. When powering it on, it keeps making clunking/chirping disk noises and then sort of resets itself (with a flash of orange light in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now – the only way I can affect the device at all is to plug in or pull out the power cord!

    (To be clear, I've come to regard this piece of garbage - which cost about 460 € - as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.)

    Anyway, the question at hand is: how do I get my data out of it? As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting them on one of the Macs or Linux boxes I have available (although I don't know yet whether I'd need some adapters for that).

    (NB: for the purposes of this question, never mind any warranty or replacement issues – those are secondary to recovering the data.)
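
    If the disks do come out, a rough sketch of the Linux side of the recovery - my assumption of how it might go, not something from the original question; /dev/sdb1 is a placeholder, and the Shared Storage II's actual partition layout and filesystem may well differ:

        # after attaching one disk via a SATA or USB-SATA adapter:
        dmesg | tail                                     # find the device name
        sudo mdadm --examine /dev/sdb1                   # look for software-RAID metadata
        sudo mdadm --assemble --run /dev/md0 /dev/sdb1   # assemble one RAID1 half, degraded
        sudo mount -o ro /dev/md0 /mnt                   # mount read-only and copy data off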

  • How do you gracefully upgrade mission critical systems to wildly disparate systems?

    - by Ernie
    In the span of my 12+ year career, I have yet to overcome this hurdle, and I suspect the answer simply isn't easy or even possible, so I ask everyone here for their experience.

    Say that you're running into egregious problems that can only be fixed by moving from one platform to another - either because of a mistake in choosing the platform years ago, or because you've simply grown beyond what the system was originally designed for. You know for certain that the cruft that has built up over time will invariably mean it is nearly impossible to test for all the things that will certainly lead to tech-support hell - which we all know leads to the loss of customers. Not that customers aren't already complaining about the egregious problems that exist!

    The best approach I've discovered so far is to devise a plan for the changeover, test it on a few clients, then on a dozen, then on a hundred, then finally finish the changeover for everyone and pray you've worked out all the bugs with those first hundred and twenty - and that the animal by-products will not hit the ventilation system in the most spectacular fashion possible. However, that doesn't mean they won't anyway.

    So say that you're moving from Exchange to Exim (or even just from Sendmail to Exim). How do you handle it?

  • Users and Groups management on 7 Home Premium

    - by AviD
    I recently upgraded the home PC from XP Pro to Windows 7 Home Premium. I'm looking for a solution to a few things that seem to be missing from this edition...

    Since Local Users and Groups is blocked on Home Premium, I can't figure out how to manage groups, or even do anything even slightly advanced with users (basically, create/group/picture is it). net localgroup, net users, net etc. don't seem to work - I get "system error 5". While I'm on the topic, I can't open (what was once) "Local Security Policy" either...

    I'm looking for any help, advice, or even a new direction, 'cuz things is differ'nt on Winnows7... To clarify, I'm looking to do some of the following, which were simple back in XP-land:

    - remote user only (i.e. no local logon)
    - grant special privileges to a specific user
    - grant access to e.g. the C$ share to a specific remote user
    - create custom groups of users, to be able to separate the privileges of, say, my wife's account from my kids'
    - define quite specifically what each user can do (beyond just standard users)
    - harden the OS (hmm, I guess maybe what I'm looking for is a security hardening guide for 7...?)
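
    One thing worth ruling out first (my note, not from the original question): "system error 5" from the net commands is access denied, which is what they return from a non-elevated prompt even on an administrator account. A sketch, with GroupName and UserName as placeholders - start cmd.exe via "Run as administrator", then:

        REM create a group, add a user to it, list accounts
        net localgroup GroupName /add
        net localgroup GroupName UserName /add
        net user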

  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity, and suggested solutions. There have been several questions about centralising system logs and about alternative log-analysis systems, but I don't get the impression that any of them help with issue resolution.

    A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors (I have two; that is how I know I have the jobs of two people) set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and myself sadly, to set up email filters and ignore the whole thing until something goes "bang".

    It would be handy to have someone else tell them what is important and what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I would still need to bring my own highly trained monkey to the party - which I am several years from having.

  • Three server processes consume no more than 50% of Dual Core CPU

    - by thor
    I have three processes running on an Intel Core 2 Duo CPU. From watching the output of 'top' and graphs of CPU load (drawn by MRTG, with data collected via SNMP), I can see that CPU load is never more than 50%; for most of the day, while those processes are busy, CPU load sits at a ceiling of 50%. I mean that CPU load grows to 50% in the morning and stays there until late evening.

    My first thought was that only one core was being used at 100%, thus giving 50% across both CPUs. But there are three processes running, and 'top' shows that both cores are being loaded, so this is not the case. schedtool shows that the CPU affinity for those three processes is at the default, 0x03, allowing them to use both cores. If I force one process onto the first core (schedtool -a 0x01) and the other two onto the second (schedtool -a 0x02), cumulative usage grows beyond 50%.

    Why do three processes seem to consume only 50% of two cores? And why does forcing them onto different CPUs allow usage to grow higher? Any hints?

    P.S. The processes in question are Counter-Strike servers.
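
    A couple of commands that make this easier to see (my addition, not part of the original question; the PIDs are placeholders):

        # per-core utilisation, refreshed every second (sysstat package)
        mpstat -P ALL 1
        # equivalent pinning with taskset instead of schedtool:
        taskset -cp 0 PID1      # bind the first server to core 0
        taskset -cp 1 PID2      # bind the second server to core 1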

  • Need info on scripts and Autoforward through Exchange Server in Outlook 2010

    - by user103037
    I am using the information below to auto-forward my work emails to my BB via a Gmail account. The script works fine, but my work email asks, for every message, whether to send it classified or unclassified. What would I add to the script below, and where, to auto-forward everything as unclassified?

    I have written some VBA script to bypass the server's disabling of auto-forward. Basically it mimics the user forwarding the email rather than the server doing an auto-forward. It's pretty simple:

        Sub AutoForwardAllSentItems(Item As Outlook.MailItem)
            Dim strMsg As String
            Dim myFwd As Outlook.MailItem
            Set myFwd = Item.Forward
            myFwd.Recipients.Add "[email protected]"
            myFwd.Send
            Set myFwd = Nothing
        End Sub

    It's beyond the scope of this post to give detailed instructions, but here's a summary:

    - Add the above code in the Visual Basic editor of Outlook (Alt-F11 should get you started).
    - Be sure to change [email protected] to the address where you want the mail to go.
    - Tell Outlook to run this code for each inbound message (Tools - Rules and Alerts - New Rule - Check messages when they arrive - Next - yes - checkbox "Run a script" - then select the script you just created).

    Now Outlook should automatically forward each email you receive, and it won't be blocked by the Admin as an "Auto-forward".

  • RAID 10 or RAID 5 for multiple VMs - what is the best choice?

    - by Lars Fastrup
    I have just ordered a new rig for my business. We do a lot of software development for Microsoft SharePoint and need the rig to run several virtual machines for development and test purposes. We will be using the free VMware ESXi for virtualization.

    For a start, we plan to build and run the following VMs, all with Windows Server 2008 R2 x64:

    - Active Directory server
    - MS SQL Server 2008 R2
    - automated build server
    - SharePoint 2010 server hosting our public web site and our internal intranet for a few people (the load on this server is going to be quite insignificant)
    - 2x SharePoint 2007 development server
    - 2x SharePoint 2010 development server

    Beyond that, we will need to build several SharePoint farms for testing purposes. Those VMs will only be started when needed.

    The specs of the new rig:

    - Dell R610 rack server
    - 2x Intel Xeon E5620
    - 48 GB RAM
    - 6x 146 GB SAS drives
    - Dell H700 RAID controller

    We believe the new server is going to make our VMs perform a lot better than our existing setup (2x Intel Xeon, 16 GB RAM, 2x 500 GB SATA in RAID 1). But we are not sure about the RAID level for the new rig. Should we put the six 146 GB SAS drives in a RAID 10 configuration or a RAID 5 configuration? RAID 10 seems to offer better write performance and a lower risk of RAID failure, but it comes at the cost of less drive space. Do we need RAID 10, or would RAID 5 also be a good choice for us?
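
    For reference - my arithmetic, not anything from the original post - the raw trade-off for six 146 GB drives works out as:

        RAID 10 usable: (6 / 2) x 146 GB = 438 GB
        RAID 5  usable: (6 - 1) x 146 GB = 730 GB

        Small random-write cost: RAID 10 = 2 disk I/Os per write;
        RAID 5 = 4 (read data, read parity, write data, write parity)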

  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use Ansible to manage a group of existing servers. I have created an ansible_hosts file and tested successfully (with the -K option) with commands that target only a single host:

        ansible -i ansible_hosts host1 --sudo -K # + commands ...

    My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up front, which then seems to be tried on all subsequent hosts without prompting:

        host1 | ...
        host2 | FAILED => Incorrect sudo password
        host3 | FAILED => Incorrect sudo password
        host4 | FAILED => Incorrect sudo password
        host5 | FAILED => Incorrect sudo password

    My research so far:

    - a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"
    - the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it's not required" (emphasis mine)
    - a security StackExchange question which takes it as read that NOPASSWD is required
    - the article "Scalable and Understandable Provisioning...", which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"
    - the article "Basic Ansible Playbooks", which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don't" - my thoughts exactly, but then how do you extend beyond a single server?
    - Ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."

    So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing one password across hosts, and enabling root SSH login all seem like rather drastic reductions in security.
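
    One pattern that may be worth testing (my sketch, not from the original question): Ansible reads a per-host ansible_sudo_pass variable from the inventory or from host_vars files, so each host can carry its own sudo password. The host names and passwords below are placeholders, and these files would themselves need protecting - newer Ansible releases have ansible-vault for exactly that:

        # one host_vars file per host, next to the ansible_hosts inventory
        mkdir -p host_vars
        cat > host_vars/host1 <<'EOF'
        ansible_sudo_pass: password-for-host1
        EOF
        cat > host_vars/host2 <<'EOF'
        ansible_sudo_pass: password-for-host2
        EOF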

  • Upgrading memory in a laptop

    - by ulidtko
    I'm a bit confused by all the memory types and the various bus frequencies of modern consumer PCs, and am requesting expert help on the subject. So far I'm confident of the following:

    - I have an Asus X51L laptop with an unknown set of configuration options.
    - The CPU in there supports PAE, so I still have a chance to extend the memory beyond 3 GiB; and the upper limit of the system is 8 GiB (?).
    - The laptop has two SODIMM slots, one of which is occupied by a 2 GiB module, while the other one is empty.
    - The dmidecode and lshw tools consistently report a frequency of 533 MHz for the installed module.

    The last point confuses me the most. I failed to find the characteristics of the northbridge in this laptop, and still can't figure out what kind of DDR2 to look for. Is it DDR2-1066? Or rather PC2-8500/PC2-8600? Wouldn't a DDR2-800 module harm the system's performance? Which kind of modules should I look for in stores?

    Update: I have bought a 2 GiB DDR2-800 SODIMM, and it seems the system can't handle 4 GiB of memory. Installed by itself in either slot, each module - both the new one and the old one (which, by the way, is marked GDDR2-677) - works perfectly; i.e. any configuration resulting in 2 GiB works. When both modules are installed, though (totalling 4 GiB), the memtest86 tool produces horrible artifacts and crashes, and the system reboots; an Ubuntu system can be started and even logged into a Unity session, but the system likewise reboots under even a minor RAM load. So it's pretty obvious to me now that this laptop doesn't support 4 GiB of RAM or more.
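
    For anyone in the same spot, the SMBIOS tables often answer the "what does this board support" question directly - my suggestion, going one step beyond the plain dmidecode run mentioned above:

        sudo dmidecode -t 16   # "Physical Memory Array": maximum capacity, slot count
        sudo dmidecode -t 17   # per-slot "Memory Device": type, size, speed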

  • Wireless signal changes from strong to weak after connecting

    - by gibberish
    The router (primary AP) is a WRVS4400N; the WAP (signal booster) is a WAP4410N.

    Problem: a user is physically located within ten feet of the WAP (200 feet from the main wireless router). The signal is at 5 bars as the user connects to the wireless network. Within seconds, the signal is at or below two bars and the connection is poor.

    Background: we are trying to solve the problem of a weak wireless signal in the back offices. The desired result is for client laptops to automatically switch to the stronger signal. Relevant details:

    - The WAP is connected to the network via an Ethernet cable.
    - The WAP is set to AP mode (instead of Wireless Repeater mode).
    - The WAP does appear to boost the signal: using the Windows 7 sys-tray Connect To A Network applet, we can watch the signal strength rise as a laptop approaches the WAP.
    - The problem described above happens to users located near or beyond the WAP. It does not happen to users in close proximity to the router.

    Secondary question: when using the WAP in AP mode, do the WAP and the router (primary AP) need to be on the same channel?

  • email attachments [closed]

    - by Alan Doolan
    My company currently uses software on a local machine that takes an email from the email server, extracts the attachment, renames it, and then adds it to a folder on a web server using FTP. This works well, but they are now asking whether it can be done 'in the cloud' - or, what they really mean, not locally. Is there anything that would do this on the server itself?

    I should clarify a bit. The attachments are various reports being sent to different email addresses (mostly Google corporate and free accounts). We need the reports to be in a folder on a web server so that internal pages can take the information in the reports (CSV) and use it on the web pages, or add it to a separate database. The key part is that the files need to end up in the particular folders. Though it does work to have a computer running software that takes the files, renames them to the required names, and uploads them to the folder, it relies too heavily on one computer working all the time. That is not something we can depend on at this point.

    I'll be honest: I'm a web developer, not strong with server systems beyond my standard requirements, so this is beyond me. And yes, I am aware that my boss is not 100% sure what 'cloud' means, but he likes the word.
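
    If the reports mailbox can be delivered to (or fetched onto) the web server itself, one minimal shape for this - purely my sketch, with every path and tool choice an assumption - is a cron job that unpacks MIME attachments with munpack (from the mpack package) straight into the target folder:

        # extract attachments from each queued message into the web folder
        for msg in /var/mail/reports-queue/*.eml; do
            munpack -q -C /var/www/reports "$msg"   # -C: directory to extract into
            # renaming to the required file names would happen here
        done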

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task with a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu.

    This led me to wonder about a virtual machine approach. Could I put together a virtual machine which boots up, runs the computation (with parameters from, and results to, the host), and then shuts down? In other words, I'd like a command like this that I could run on the host:

        $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/

    This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/.

    It's important to be able to start many instances of the VM simultaneously, both on a single node and on multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM's virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communication beyond reading and writing the host filesystem.

    Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
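
    On the "no per-instance disk copies" point, QEMU has two mechanisms that fit - my sketch of one possible route, not a statement about what these clusters support; base.qcow2 and the paths are placeholders:

        # throwaway instances: -snapshot sends all writes to a temp file,
        # so any number of instances can share one read-only base image
        qemu-system-x86_64 -m 2048 -snapshot -hda base.qcow2

        # or: thin per-instance overlays backed by a single base image
        # (newer qemu-img releases also want -F qcow2 for the backing format)
        qemu-img create -f qcow2 -b base.qcow2 instance1.qcow2

    Getting results back out without networking is the harder half; a shared directory (e.g. QEMU's 9p/virtfs passthrough) is one option, but it needs guest-side support.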

  • Clicking far away in vim in tmux in urxvt

    - by paps
    I use vim inside tmux inside urxvt, and the mouse works perfectly well for clicking and selecting text, except when I want to click too far to the right. It seems to be related to the distance, in number of columns, from the left edge. When I go beyond column ~200 (not sure about the exact number), clicking simply does nothing.

    Note that it is not related to a vim window: with two vim windows taking ~150 columns each, clicking will not work after the ~50th column of the second window. It is related to the whole vim session. Also note that clicking far away in a big tmux pane (200 columns) works perfectly.

    In my .tmux.conf I have this line:

        set -g default-terminal "screen-256color"

    and in my .vimrc I have this:

        if &term =~ "^screen"
            autocmd VimEnter * silent !echo -ne "\033Ptmux;\033\033]12;7\007\033\\"
            let &t_SI = "\<Esc>Ptmux;\<Esc>\<Esc>]12;5\x7\<Esc>\\"
            let &t_EI = "\<Esc>Ptmux;\<Esc>\<Esc>]12;7\x7\<Esc>\\"
            autocmd VimLeave * silent !echo -ne "\033Ptmux;\033\033]12;14\007\033\\"
        end

    It changes the cursor's color depending on vim's editing mode, and it works, meaning that tmux really does set $TERM to "screen-256color" - but I don't know if this has any relevance to my mouse problem.

    I'm running Ubuntu 12.04, vim 7.3, tmux 1.6 and rxvt-unicode 9.14. Does anybody have an idea about what is causing this problem? Thanks.
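
    The column ~200 symptom matches a known limitation rather than a bug in this particular setup: the original xterm mouse protocol encodes each coordinate as a single byte (32 + column), so it cannot report clicks beyond column 223. A sketch of the usual workaround, assuming a vim new enough (7.3.632+) to have the extended mouse protocols:

        " in .vimrc: use extended mouse reporting when available
        if has('mouse_sgr')
            set ttymouse=sgr
        endif

    The terminal has to support the extended protocol too; vim also accepts ttymouse=urxvt for rxvt-unicode's own extension.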

  • Why is only one Excel spreadsheet crippled, but others are fine?

    - by Dallas
    I have an inherited spreadsheet that I really don't want to rebuild at the moment. It's a small workbook (< 200 rows that don't even reach column AA) and does nothing more than calculate some totals within its own worksheets. There are no macros, no external data sources, nothing beyond basic formatting of dates, numbers and strings.

    I see that importing data from CSV/text has created many, many workbook connections over time, but even after deleting them all (there were hundreds) there is no difference in performance. Even clicking to simply change focus from cell to cell takes 10+ seconds, adorned by the spinning cursor, "(Not Responding)" appended to the title bar, and the application locking up. The program seems to "recover" every time, but the efficiency of editing this file is obviously severely handicapped.

    All other files seem fine in Excel, and other programs have no apparent performance issues. I see Excel is chewing up CPU, but I'm not sure how to narrow down what process or service is "clashing" with it. I tried the same file on other computers and performance is fine there. If I turn off all start-up services and run only Excel, performance is restored... until I start using other programs, and then it bogs down again.

    At this point, I would entertain almost any idea, theory or suggestion that helps pinpoint, solve or work around the issue.

  • USB-to-Serial showing gibberish at 115200 Baud

    - by Mose
    I've got a serious problem which is driving me crazy, because I have tried everything I can think of. First of all, I made a video: http://youtu.be/boghkuq7L_s - but please read the following text for more information; don't rely on the video alone!

    When using a USB-to-serial interface, everything works as long as I don't go beyond 57600 baud. At higher rates I only get gibberish like this:

        év.­b0JNLYÆÿ¿iëd0U²(kßÞb! ú]/xscB!ï¯!BoXûÿ1ïâÖCÿ6ÌAnè*íÌC)º¿BíÞØ.C.@ÆÃwHJÂs
        "YE:ñ.èFðÌCÊ÷ÞÄ !x H w6@BtbHJ ̪ Ì6ì H¾a¿bH.">îvy®;f<ßBÌ p­L¨fæH­E ­þ¼MBÞI

    What makes the problem so strange is that I have exchanged every component and the problem still persists. I tried different OSes (Ubuntu, WinXP, Win7, OS X 10.7), both 32- and 64-bit. I tried USB-to-serial interfaces from FTDI and Prolific. I tried reading the output from my Raspberry Pi and from an Asterisk appliance. I changed the cables and the wiring. Nothing helped.

    In the video, I set up an example with an old notebook with a native COM port and attached the USB-to-serial adapter to the same connection as a "sniffer" (only RX and GND connected), to verify against the native port that the output is OK. The voltage is OK. Settings on both are 115200 baud, 8 data bits, 1 stop bit, no flow control. The native port reads fine; the USB one is messed up. I used the newest drivers and double-checked all connections.

    I have no idea what is wrong here. As I couldn't find anyone describing a problem like this, I am questioning my long experience in computer science and suspect I am doing something completely wrong... Please help :-/
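
    One more check that is cheap to run on the Linux side (my addition, not from the original post; /dev/ttyUSB0 is an assumption):

        stty -F /dev/ttyUSB0                  # show current line settings
        stty -F /dev/ttyUSB0 115200 cs8 -cstopb -parenb -crtscts
        cat /dev/ttyUSB0                      # raw dump of incoming bytes

    If the gibberish pattern changes as the configured rate changes, the adapter's clocking is at least being applied; identical gibberish at every setting points elsewhere.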

  • Steps to debugging web site latency and timeout issues

    - by Paperjam
    I have a client with multiple offices around the country, all of which share the same Internet connection via their WAN. One specific office for this client is experiencing severe latency and timeout issues with my web site. Most, but not all, of the latency occurs on one specific ASPX page where multiple postbacks are made while populating cascading dropdown lists (rapid form submits). The latency is sporadic and can be anywhere from a few seconds to a full timeout. There is no indication that the timeouts are occurring on the server's end.

    The IT guy for this client is having trouble narrowing down the problem. Since it affects only one location for one client, I am led to believe it is not something with my site but something specific to that location. He has measured ping times while using the site and has noticed no real variance in them, even when the page has timed out. I believe this may be caused by some sort of Internet filter that doesn't like rapid form submits, but beyond a hunch I haven't a clue.

    My question is: what should I tell the IT guy to look for? While I'm not trying to provide active tech support for this issue, I would at least like to gain an understanding of what is going on and try to offer some sort of advice. Thank you.
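
    One concrete thing to hand him (my suggestion, not from the original post; the URL is a placeholder): curl's timing variables split each request into phases, so a loop run from the affected office shows whether the stall is in DNS, TCP connect, first byte, or transfer:

        while true; do
            curl -o /dev/null -s -w \
              "dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n" \
              "http://example.com/slow-page.aspx"
            sleep 5
        done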
