Search Results

Search found 1685 results on 68 pages for 'no more guessing'.


  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10MB/s transfers when copying to/from the server. I use software RAID5 (mdadm) over four 1TB disks. On top of that I use LVM to give me a huge pool of disk space, which is then split up into multiple partitions that can be resized as and when they need it. I'm guessing this is most likely the cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <-> Ubuntu server) and hard disk performance to try to identify where my bottleneck might be?

    [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2, so that's a 300-series Atom processor with the Intel® 945GC Express Chipset.

    [Edit2] I feel like such a fool! I just checked my desktop and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server I get ~35-40MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
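
    A minimal way to test each layer separately - network first, then raw disk, then the filesystem (assuming iperf is installed on both machines; hostnames and device names below are placeholders):

        # Network throughput, independent of any disk I/O
        iperf -s                       # on the Ubuntu server
        iperf -c <server-ip> -t 30     # on the Windows 7 desktop (iperf Windows build)

        # Raw sequential read from the md array and from one member disk
        sudo hdparm -t /dev/md0
        sudo hdparm -t /dev/sda

        # Sequential write through LVM + filesystem (fdatasync defeats cache effects)
        dd if=/dev/zero of=/srv/testfile bs=1M count=1024 conv=fdatasync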


  • ssh connection slow when using @hostname.com but not when using @ipaddress

    - by Alex Recarey
    When connecting to a Debian server using ssh, if I use [email protected] (the IP address of the server) the connection is instant. If however I use [email protected] (a DNS record pointing to the IP address of the server), the ssh connection hangs for 20 seconds before connecting successfully. The ssh logs show the following:

        [alex@alex home]$ ssh -v -v [email protected]
        OpenSSH_5.5p1, OpenSSL 1.0.0c-fips 2 Dec 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0

    and here it hangs for 20 seconds before continuing. I think it might have something to do with reverse DNS or similar (the server does not really "know" its name is hostname.com; it just has that DNS pointed at its IP address). I have added the following options to /etc/ssh/sshd_config, to no effect:

        UseDNS no
        GSSAPIAuthentication no

    The server's resolver settings in /etc/resolv.conf are configured correctly:

        ping hostname.com
        PING sub.domain.com (X.X.X.X) 56(84) bytes of data.
        64 bytes from replicant (X.X.X.X): icmp_seq=1 ttl=64 time=0.029 ms
        64 bytes from replicant (X.X.X.X): icmp_seq=2 ttl=64 time=0.050 ms

    Thanks for the help.

    Solution: It seems the DSL router my ISP saddled me with was causing the trouble. Changing my DNS server from 192.168.1.1 (the router's IP) to Google's (8.8.8.8, always good to know when you are in a hurry) instantly solved the connection delay problem. I am guessing that the 50€ router provided does not cache DNS entries, although I don't understand why pinging the DNS address had no delay, and 20 seconds is too long a wait, even for uncached DNS. Thanks again for the help!
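
    A quick way to confirm a resolver delay like this (hostnames are placeholders): time the lookup against the default resolver and against a known-fast one, and compare.

        time nslookup hostname.com            # default resolver (the router)
        time nslookup hostname.com 8.8.8.8    # Google's resolver

        # Client-side mitigation while stuck behind a slow resolver:
        printf 'Host *\n  GSSAPIAuthentication no\n' >> ~/.ssh/config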


  • 403 Forbidden when Deploying asp.net 4.0 site to IIS 7

    - by Jordan
    So I have an EC2 instance running the URL NoWeatherSurprises.com. I have the DNS pointing there, and I set up a new site in IIS 7 and pointed it to a folder. I used Visual Studio Web Developer 2010 Express to publish to this folder, so it now has the binaries and such. However, if I go to NoWeatherSurprises.com I get the 'Welcome to IIS 7' screen, where I'd expect to see my application. If I navigate to http://noweathersurprises.com/weather/ [weather was the folder I published to under wwwroot], I get a 403 Forbidden. I have no idea why; I am guessing that it is trying to do a directory listing or something instead of launching my MVC application. So, two problems in summary: it is not pointing the domain to the folder directly and I need to add /weather, and I am getting a 403 Forbidden instead of the results of my home controller with the index action. I am new to IIS 7. I had been using IIS 6 and had a lot less trouble setting it up, but I suspect that's my own fault and I am just missing something. Thanks in advance for any help.
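
    A way to inspect and adjust this from an elevated prompt with appcmd (the site name and paths below are assumptions based on the description):

        %windir%\system32\inetsrv\appcmd list site
        %windir%\system32\inetsrv\appcmd list app

        rem Point the site's root application at the published folder,
        rem so the app answers at / rather than /weather
        %windir%\system32\inetsrv\appcmd set vdir "NoWeatherSurprises/" -physicalPath:"C:\inetpub\wwwroot\weather"

    If the 403 persists after that, check that the site's application pool targets .NET 4.0; an ASP.NET 4.0 MVC app running under a 2.0 pool typically fails in exactly this unhelpful way.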


  • Cooling Server Closet - No A/C Is Possible

    - by JamesCo
    We're moving into a new office in an old building in London (that's England :) and are walling off a 2m x 1.3m area, where the router and telephone equipment currently terminates, to use as a server closet. The closet will contain:

        2 24-port switches
        1 router
        1 VDSL modem
        1 Dell desktop
        1 4-bay NAS
        1 HP micro-server
        1 UPS
        miscellaneous minor telephony boxes

    There is no central A/C in the office and there never will be. We can install ducting to the outside quite easily - it's only a couple of metres to the windows, which face a courtyard. My question is whether installing an extractor fan with ducting to the window should be sufficient for cooling. Would an intake fan and intake duct (from the window, too) be required? We don't want to leave a gap in the closet door, as that'll let noise out into the office. If we don't have to put a portable A/C unit into the closet, that'd be perfect. The office has about 12 people; London is temperate - the average maximum in August is 31°C, and 25°C is more typical. The same equipment runs fine in our current office (same building as the new office, also no A/C), but it isn't in an enclosed space. I can see us putting, say, one Dell 2950 tower server into the closet, but no more than that. So, sustained power consumption in the closet would currently be about 800W (I'm guessing); possibly 2kW in the future. The closet will have a ceiling and no windows and be well-insulated. We don't care if the equipment runs hot, so long as it runs and we don't hear it.
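
    As a rough sanity check on the extractor-fan idea, the airflow needed scales with the heat load; a common rule of thumb is CFM ≈ 3.16 × watts / ΔT(°F). A quick sketch, assuming a tolerated 10°C (18°F) rise over ambient:

        # Airflow required to carry 2 kW at an 18 °F allowed rise
        awk 'BEGIN { w = 2000; dt = 18; cfm = 3.16 * w / dt;
                     printf "~%.0f CFM (~%.0f m3/h)\n", cfm, cfm * 1.7 }'
        # -> ~351 CFM (~597 m3/h): a proper inline duct fan, plus an intake
        #    path of similar free area so the fan isn't starved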


  • Will Parallel-port dongle work on USB-to-Parallel Adapter?

    - by Gary M. Mugford
    We have a niche program running on a Win2K laptop that uses a security dongle connected to a parallel port for authentication. The laptop is getting creaky, and I spent a frustrating night last night shopping various websites for a new laptop that had a parallel port. Seems I'm about three years late [G]. The question I have is: if I buy a new(ish) laptop and use a USB-to-parallel-port adapter, will the security dongle work? I know I'm not being specific about the app, but it's one most people wouldn't have heard of anyway. I've been guessing the answer to my question is no, since the app won't know to send a request out to the non-existent port. But if the process actually is that the dongle sends a message INTO the computer every now and then, then it might work. And I'm not sure whether the dongle is needed only at program startup or randomly. The dongle is a 'permanent' addition to the old laptop. This is all about the money: we can have a newly-updated version of the program (which won't add any features we need) for the princely sum of $2700, or we can spend $500 on a refurbed laptop still running WinXP, add a 30-buck adapter and keep the same solid, stolid performance we've come to appreciate. But it all comes down to the dongle behaviour. Oh, and a dock won't work - the whole laptop issue is about moving about the various nooks and crannies of the building with laptop in hand. Thanks for any suggestions/guidance. GM


  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when trying to access a URL which does not exist. The configuration goes as follows:

        server {
            listen 127.0.0.1:8080;
            server_name .somedomain.com;
            root /var/www/somedomain.com;
            access_log /var/log/nginx/somedomain.com-access.nginx.log;
            error_log /var/log/nginx/somedomain.com-error.nginx.log debug;

            location ~* \.php.$ {
                # Proxy all requests with an URI ending with .php*
                # (includes PHP, PHP3, PHP4, PHP5...)
                include /etc/nginx/fastcgi.conf;
            }

            # all other files
            location / {
                root /var/www/somedomain.com;
                try_files $uri $uri/ ;
            }

            error_page 404 /errors/404.html;
            location /errors/ {
                alias /var/www/errors/;
            }

            # this loads custom logging configuration which disables favicon error logging
            include /etc/nginx/drop.conf;
        }

    This domain is a simple static HTML site, just for some testing purposes. I'd expect the error_page directive to kick in in response to PHP-FPM not being able to find the given files, as I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that, somewhere on internal redirects. Any help would be much appreciated.
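
    One common cause of exactly this cycle (a guess - I can't test against this setup): when none of the try_files arguments match, nginx internally redirects to the *last* parameter, here $uri/, which lands back in location / and tries again, forever. Giving try_files an explicit terminal action rules that out:

        location / {
            root /var/www/somedomain.com;
            # '=404' ends the chain with a plain 404 (handled by error_page)
            # instead of another internal redirect
            try_files $uri $uri/ =404;
        }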


  • BizTalk configuration broken following WCF hotfix installation

    - by Sir Crispalot
    I usually post over on Stack Overflow, but thought this was probably better suited to Server Fault. Please migrate if I'm wrong! I am developing a WCF service and a BizTalk application on my workstation at the moment. As part of the WCF service, I had to install hotfix 971493 from Microsoft, which updates some core WCF assemblies. Following installation of that hotfix, I am now experiencing severe issues in my existing BizTalk application. When I attempt to configure the properties of an existing WCF-Custom receive location, I get this error:

        Error loading properties (System.IO.FileLoadException)
        The located assembly's manifest definition does not match the assembly
        reference. (Exception from HRESULT: 0x80131040)

    If I click OK (the same error repeats four times) I eventually see the WCF-Custom properties dialog. However, if I click on the various tabs, I continue to receive errors:

        The located assembly's manifest definition does not match the assembly
        reference. (Exception from HRESULT: 0x80131040)
        (Microsoft.BizTalk.Adapter.Wcf.Admin)

    The WCF-Custom receive location was working yesterday, and I installed the hotfix this morning. I'm guessing these two are related, and that BizTalk somehow has a reference to the old WCF assemblies. Does anyone know how I can fix this?
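
    A first diagnostic step (assuming the .NET SDK tools are on the machine): check which versions of the WCF assemblies actually ended up in the GAC, and capture the failing bind with the Fusion log viewer, which shows exactly which reference the BizTalk admin components can no longer satisfy.

        gacutil /l System.ServiceModel
        fuslogvw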


  • GlassFish v3: Security related updates + Repository/Publisher?

    - by chris_l
    I've used GlassFish v3.0 as my main development application server for a few weeks now. Now that I want to install it on my VPS, I'd like to get the latest security updates, because GlassFish v3 Release 3.0 (Open Source Edition or not) is already a few months old, and v3.1 is only available as "early access" nightlies (see https://glassfish.dev.java.net/public/downloadsindex.html).

    GlassFish offers an update mechanism (via pkg or updateTool), but when I simply try to get the latest updates (pkg image-update), it finds nothing. However, when I change the preferred publisher to dev.glassfish.org, I get a list with lots of updates. The interesting thing is that I haven't been able to find any description of the contents of the various publishers/repositories (release, stable, contrib and dev) anywhere on the web, most importantly one answering the question: am I supposed to use the dev repository for security updates, or does it contain unstable updates? (The name suggests unstable updates, but version numbers like "3.0.1,0-11:20100331T082227Z" leave me guessing. The build is more than a week old, so it's obviously not "nightly" or "weekly", but what is it?)

    Where do I get security updates from, then? Or are there simply no security updates yet? Asking on the GlassFish forum resulted in 56 views, but 0 answers.
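
    For reference, switching publishers from the command line looks roughly like this with the bundled pkg tool (publisher name as given in the question; whether dev is appropriate for production is exactly the open question here):

        cd glassfishv3
        ./bin/pkg publisher                           # list publishers and their origins
        ./bin/pkg set-publisher -P dev.glassfish.org  # make it the preferred publisher
        ./bin/pkg image-update -nv                    # dry run: show what would change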


  • Find out the type of an automounted device

    - by Steve Bennett
    I'm working on a system (Ubuntu Precise) with a mount defined in /etc/fstab as follows:

        /dev/vdb  /mnt  auto  defaults,nobootwait,comment=cloudconfig  0  2

    Originally I just wanted to find out if it's NFS (due to potential MySQL locking issues). Judging from man mount, it's not:

        If no -t option is given, or if the auto type is specified, mount will
        try to guess the desired type. Mount uses the blkid library for
        guessing the filesystem type; if that does not turn up anything that
        looks familiar, mount will try to read the file /etc/filesystems, or,
        if that does not exist, /proc/filesystems. All of the filesystem types
        listed there will be tried, except for those that are labeled "nodev"
        (e.g., devpts, proc and nfs). If /etc/filesystems ends in a line with
        a single * only, mount will read /proc/filesystems afterwards.

    But, out of curiosity now, how can I find out more about what type of device it actually is? (For context, this is a VM running on OpenStack. The device is a 60GB allocation mounted from somewhere - but I don't know how.)

    EDIT - including answers here:

        $ mount
        /dev/vdb on /mnt type ext3 (rw,_netdev)
        $ df -T
        /dev/vdb  ext3  61927420  2936068  55845624  5%  /mnt
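
    A few other standard ways to interrogate the block device directly (all available on Precise):

        sudo blkid /dev/vdb       # type/UUID/label as libblkid sees them
        lsblk -f                  # block-device tree with filesystems and mountpoints
        sudo file -s /dev/vdb     # reads the superblock directly
        grep vdb /proc/mounts     # the type the kernel actually mounted it as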


  • What could cause these "failed to authenticate" logs other than failed login attempts (OSX)?

    - by Tom
    I've found this in the Console logs:

        10/03/10 3:53:58 PM SecurityAgent[156] User info context values set for tom
        10/03/10 3:53:58 PM authorizationhost[154] Failed to authenticate user (tDirStatus: -14090).
        10/03/10 3:54:00 PM SecurityAgent[156] User info context values set for tom
        10/03/10 3:54:00 PM authorizationhost[154] Failed to authenticate user (tDirStatus: -14090).
        10/03/10 3:54:03 PM SecurityAgent[156] User info context values set for tom
        10/03/10 3:54:03 PM authorizationhost[154] Failed to authenticate user (tDirStatus: -14090).

    There are about 11 of these "failed to authenticate" messages logged in quick succession. It looks to me like someone is sitting there trying to guess the password. However, when I tried to replicate this I get the same log messages, except that this extra message appears after five attempts:

        13/03/10 1:18:48 PM DirectoryService[11] Failed Authentication return is being delayed due to over five recent auth failures for username: tom.

    I don't want to accuse someone of trying to break into an account without being sure that they were actually trying to break in. My question is this: is it almost definitely someone guessing a password, or could the 11 "failed to authenticate" messages be caused by something else?


  • Connecting to MySQL Server from PHP Command Line (MAMP)

    - by Austin White
    First of all, I'm using Mac OS X 10.6, MAMP 1.9, PHP 5.3.4, and MySQL 5.1.44. I'm in the process of setting up a video encoding service for a site using Chris Boulton's PHP-Resque and Redis. Once the worker process is fired and the videos have been encoded, I need to save their locations to a MySQL database. The PHP script is being run from the shell, so that is where the issue begins. I import the MySQL settings, and when it attempts to connect I get the following errors:

        Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::mysqli(): [2002] php_network_getaddresses: getaddrinfo failed: nodename nor servn (trying to connect via tcp://MYSQL_SERVER:3306) in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::set_charset(): Couldn't fetch MySQLi_Extended in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 32

    I realize that the error is occurring because it's trying to connect to tcp://MYSQL_SERVER:3306, when MySQL is on port 8889. I've been reading about Mac OS X and MAMP errors regarding mysql.sock, and I've gone through multiple forums and tried various fixes, but none have worked. I've tried

        PATH=/Applications/MAMP/Library/bin/:/Applications/MAMP/bin/php5.3/bin/:/opt/local/bin:/opt/local/sbin:$PATH

    and

        sudo ln -s /Applications/MAMP/tmp/mysql/mysql.sock /tmp/mysql.sock

    but neither has worked. I even ran a search on my machine for "3306" to find where it's being set, but because that's the normal default, I'm guessing it's not being set explicitly. Any clues on how to fix this rather challenging error?
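
    Two things worth checking (guesses from the error text): getaddrinfo failing on the literal name MYSQL_SERVER suggests the config constant isn't defined when the script runs from the CLI (a different php.ini/bootstrap path than under Apache), and 'localhost' vs 127.0.0.1 matters because 'localhost' makes the client use the Unix socket. A quick shell test against MAMP's MySQL over TCP:

        # Does MAMP's MySQL answer on TCP at all? (password prompt follows)
        /Applications/MAMP/Library/bin/mysql -h 127.0.0.1 -P 8889 -u root -p

        # Which php.ini does the CLI binary actually read?
        /Applications/MAMP/bin/php5.3/bin/php --ini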


  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning, clueless-newbism ahead.) Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields:

        downloading xdebug-2.0.5.tgz ...
        Starting to download xdebug-2.0.5.tgz (289,234 bytes)
        ...done: 289,234 bytes
        67 source files, building
        running: phpize
        sh: phpize: not found
        ERROR: `phpize' failed

    A quick google tells me that phpize is part of a package called php5-dev, so off I ran to install that. My problem is that sudo apt-get install php5-dev fails with this output:

        sudo apt-get install php5-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
          php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed
        E: Broken packages

    2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point. I'm guessing the fact that it's not entirely numeric is throwing apt-get off? I can probably install xdebug manually (though I've never done this before, so picture me with a deer clueless-newb in headlights look here, violently shaking my head and begging for a simpler solution) rather than via pecl/aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: could I install phpize in some other way (e.g. via pear or pecl)?
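
    One reading of that error (Debian Conflicts semantics, not a version-comparison bug): php5-dev is declaring it cannot coexist with libtool 2.2 or newer, so the installed 2.2.6a-4 being "greater" is precisely the problem rather than the solution. Two things worth trying:

        apt-cache policy libtool php5-dev   # which versions each repository offers
        sudo aptitude install php5-dev      # aptitude proposes resolutions (such as
                                            # downgrading libtool) where apt-get
                                            # just gives up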


  • Server 2008 NAT Internet Not Working

    - by Jack
    I'm trying to set up Routing and Remote Access on Windows Server 2008 R2. I have a network connection that I want to share the internet from to another private network. The server has two NICs, configured as follows:

        External NIC (dynamically assigned by ISP):
        IP: 10.175.4.150
        Subnet: 255.255.192.0
        Gateway: 10.175.0.1
        DNS: 10.175.0.1

        Internal NIC:
        IP: 172.16.254.1
        Subnet: 255.255.255.0
        Gateway: none
        DNS: none

    I have set the external NIC to be the public interface and enabled NAT on it in the RRAS MMC, and set the internal NIC to be a private interface. I have also set up the DNS forwarding (or whatever it is) in the NAT section. From a client (IP: 172.16.254.2) I can ping the server and access files on it. When I try to browse the web with the default gateway set to the internal NIC's IP, I end up getting a 404 page which is returned from the ISP's default gateway. I'm guessing it's something to do with the double NAT, possibly. Trying to ping the ISP's default gateway from a private-network client just times out, as does accessing it directly. I've disabled and reconfigured RRAS multiple times and that doesn't seem to have made a difference, so can anyone tell me what I'm doing wrong? Thanks.
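
    Some quick checks from both sides (standard Windows tools; addresses taken from the question):

        rem On the 172.16.254.2 client: where does the path actually die?
        tracert -d 8.8.8.8

        rem Does name resolution work from the private side when asked explicitly?
        nslookup google.com 8.8.8.8

        rem On the server: confirm forwarding is enabled on both interfaces
        netsh interface ipv4 show interfaces level=verbose | findstr /i forward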


  • Mac OSX: which folders should ClamXav Sentry watch?

    - by trolle3000
    I'm using ClamXav on my Mac. I've read this, and I am aware of the whole macs-need-no-AV-but-they-do-anyway discussion. I guess that's why I would feel like a real ass if I somehow managed to compromise my system! So ClamXav has been downloaded and ClamXav Sentry set up to start on log-in, but it doesn't really do anything before you tell it to. Specifically, you have to tell it which folders to watch for viruses/vira, so I'm wondering: where are good places to look? Currently it's been set up to watch the following places:

        In the home folder:
        ~/Downloads
        ~/Library/Caches
        ~/Library/Contextual Menu Items
        ~/Library/Cookies
        ~/Library/Internet Plug-Ins
        ~/Library/LaunchAgents

        In my system folder:
        /Library/Application Support
        /Library/Caches
        /Library/Contextual Menu Items
        /Library/Cookies
        /Library/Internet Plug-Ins
        /Library/LaunchAgents
        /Library/LaunchDaemons
        /Library/StartupItems

    Basically, this is 100% conjecture: all (most of) the folders have something to do with the internet and things that start up automatically, so I'm guessing that's where vira go. But still, the question: which folders should ClamXav Sentry watch, if any? FYI, I'm not using any mail apps, but please include that in your answer for anyone who might be interested. Cheers!


  • What do you upgrade to make games load faster? [on hold]

    - by Superbest
    Let's say you have a relatively modern game like Shogun 2. The loading screens take several minutes. This bothers you and you'd like to improve it. What is actually going on when loading screens are up? I'm guessing assets are being loaded into memory from disk, and possibly being decompressed first. However, what is actually causing the slowdown? The memory? Mainboard? CPU? HDD? If you had $100 to spend on upgrades and your only goal is to speed up loading screens without reducing other performance, what component of the computer does it make sense to upgrade for maximum benefit? If your answer is "it depends on the existing setup", what sort of benchmarks would you run to determine what is causing the bottleneck? What if you had $500 instead? I give the two budgets for context. I am not asking for actual recommendations about which components to buy (nor are the numbers supposed to be rigid limits), but what features are important when shopping for components with small and large budgets (a large budget could allow buying multiple components which are not so good on their own, but work particularly well together). I mention Shogun 2 as an example, but I'm asking about reducing overall loading times, across all games, not just one game. Therefore, "put it on a solid state disk" probably won't be a good solution, because putting every game on your SSD will quickly fill it up.
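
    A low-effort way to see which resource saturates during a loading screen, using only tools that ship with Windows: watch Resource Monitor's disk queue and CPU graphs while the game loads, and get a baseline disk figure from WinSAT.

        rem Built-in disk benchmark for drive C:
        winsat disk -drive c

        rem Opens Resource Monitor directly
        perfmon /res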


  • Strange IIS hits originating from Trend Micro

    - by TesterTurnedDeveloper
    I'm trying to trace through an error on an extranet site I maintain. I've had a look through the logs, and I'm seeing hits originating from these IP addresses:

        216.104.15.130
        216.104.15.138
        216.104.15.142
        216.104.15.13
        150.70.84.49
        150.70.84.44

    Network-tools.com gives 'TREND MICRO INCORPORATED' as the owner of all these IPs. The hits fail as they aren't sending any cookies (and therefore aren't considered logged in). The hits are to pages containing URLs that only a logged-in user would see, e.g. ImageEdit.aspx?ImageId=467424 - i.e. the server isn't guessing these URLs; someone would have to log into the site to know they exist. Theory: the Trend antivirus client grabs URLs and sends them to the server for 'extra processing'? Googling around gives me this: http://www.forumpostersunion.com/showthread.php?p=51272 - where people are reporting comment spam from these addresses. The article says their servers had been hacked (a few months ago, presumably fixed now?). A hacked server wouldn't explain how the URLs have been plucked off the users' PCs. Has anyone seen this before? Anything nefarious going on here?
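
    To put numbers on it, Log Parser 2.2 (a free Microsoft download) can summarise how much traffic those ranges generate; the log path below is an assumption:

        LogParser -i:W3C "SELECT c-ip, COUNT(*) AS Hits FROM C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log WHERE c-ip LIKE '216.104.15.%' OR c-ip LIKE '150.70.84.%' GROUP BY c-ip ORDER BY Hits DESC"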


  • Re-sizing disk partition linux/vm

    - by Tiffany Walker
    I have VM Player running a Linux guest, and I want to know how to expand the disk. In VM Player I gave it more disk space, but I am not sure how to mount/expand/connect the new disk space to the system. My old disk space was 14GB:

        [root@localhost ~]# df -h /
        Filesystem                    Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup-lv_root   14G  4.5G  8.2G  36% /

    Then I expanded it, and now I see sda2, which is the new space?

        [root@localhost ~]# fdisk -l

        Disk /dev/sda: 128.8 GB, 128849018880 bytes
        255 heads, 63 sectors/track, 15665 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000cd44d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              64        2611    20458496   8e  Linux LVM

        Disk /dev/mapper/VolGroup-lv_root: 14.5 GB, 14537457664 bytes
        255 heads, 63 sectors/track, 1767 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/VolGroup-lv_swap: 6408 MB, 6408896512 bytes
        255 heads, 63 sectors/track, 779 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

    Do I need to mount the new space first?

        resize2fs -p /dev/mapper/VolGroup-lv_root 108849018880
        resize2fs 1.41.12 (17-May-2010)
        The containing partition (or device) is only 3549184 (4k) blocks.
        You requested a new size of 1474836480 blocks.

        resize2fs -p /dev/mapper/VolGroup-lv_root 128849018880
        resize2fs 1.41.12 (17-May-2010)
        resize2fs: Invalid new size: 128849018880

        [root@localhost ~]# lvextend -L+90GB /dev/mapper/VolGroup-lv_root
        Extending logical volume lv_root to 103.54 GiB
        Insufficient free space: 23040 extents needed, but only 0 available

        [root@localhost ~]# lvextend -L90GB /dev/mapper/VolGroup-lv_root
        Extending logical volume lv_root to 90.00 GiB
        Insufficient free space: 19574 extents needed, but only 0 available

    EDIT: So after trying pvcreate/vgextend, nothing has worked so far. I'm guessing the new disk space added from VM Player is not showing up?

        pvscan
        PV /dev/sda2   VG VolGroup   lvm2 [19.51 GiB / 0 free]
        Total: 1 [19.51 GiB] / in use: 1 [19.51 GiB] / in no VG: 0 [0 ]
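
    For the record, growing the VM's virtual disk doesn't grow the LVM volume group by itself - the extra space sits unpartitioned, which is why pvscan shows 0 free. A typical sequence (a sketch only; partition and volume names are taken from the question, and repartitioning warrants a backup first):

        # 1. Create a new partition (sda3) in the unallocated space, type 8e (Linux LVM)
        fdisk /dev/sda        # n (new), p (primary), 3, accept defaults, t, 3, 8e, w
        partprobe /dev/sda    # or reboot so the kernel re-reads the partition table

        # 2. Turn it into a physical volume and add it to the volume group
        pvcreate /dev/sda3
        vgextend VolGroup /dev/sda3

        # 3. Grow the logical volume and then the filesystem inside it
        lvextend -l +100%FREE /dev/mapper/VolGroup-lv_root
        resize2fs /dev/mapper/VolGroup-lv_root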


  • setting up git on cygwin - openssl

    - by user23020
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1. When I run make install from within that directory, I get:

        CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1

    I'm guessing that most of these errors are due to the iconv.h and openssl files which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
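
    The missing headers come from Cygwin development packages that aren't installed; re-running Cygwin's setup.exe and selecting the libiconv and openssl-devel packages (names as I recall them in the Cygwin package list) should provide iconv.h and the openssl/ headers. Alternatively, git's own Makefile can be told to build without those dependencies:

        # Build git without OpenSSL and iconv support (flags defined in git's Makefile)
        make NO_OPENSSL=YesPlease NO_ICONV=YesPlease
        make install NO_OPENSSL=YesPlease NO_ICONV=YesPlease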


  • permanent NAS-mount in Ubuntu - wrong fs type, bad option, bad superblock

    - by Emil
    My network drive shows up in the file browser, just like my external USB hard drive. Moving, running and editing files works. Hovering over it shows smb://lacie-2big/nasdisk. BUT, when I want to save a file, the drive doesn't come up as an option. All I can see is my other places, including my USB hard drive. I am a complete newbie, but I am GUESSING that it has something to do with the mount not being a "real" mount, just a shortcut to the smb location. So I ran the tutorial at https://wiki.ubuntu.com/MountWindowsSharesPermanently about how to "mount a network drive permanently". I edited my fstab to:

        //LaCie-2big/nasdisk  /media/nasmount  cifs  guest,uid=1000,iocharset=utf8,codepage=unicode,unicode  0  0

    and running sudo mount -a gave me the following error:

        mount: wrong fs type, bad option, bad superblock on //LaCie-2big/nasdisk,
               missing codepage or helper program, or other error
               (for several filesystems (e.g. nfs, cifs) you might
               need a /sbin/mount.<type> helper program)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Now that's a very helpful error message. BUT, before I go any further, I'd be really thankful if one of you could tell me if I'm even in the right ballpark, or if my actual need - to be able to download files (i.e. torrents) directly to the drive - can be met as things already are. Question: how do I fix "wrong fs type, bad option, bad superblock on //LaCie-2big/nasdisk, missing codepage or helper program" when running mount -a?
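
    Two likely culprits in that fstab line (guesses - I can't test against this NAS): codepage= and the bare unicode flag aren't valid cifs options on current kernels (iocharset=utf8 already covers the charset), and mount.cifs itself must be installed (it ships in the smbfs package on older Ubuntu releases, cifs-utils on newer ones). A simplified line to try, and where the real error lands:

        # /etc/fstab - minimal cifs options
        //LaCie-2big/nasdisk  /media/nasmount  cifs  guest,uid=1000,iocharset=utf8  0  0

        # after a failed 'sudo mount -a', the kernel log has the specific cifs error
        dmesg | tail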


  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via NX Client on my Windows 7 laptop.

    My environment:
    server: Ubuntu 10.10 on an AMD 1GHz / 512MB RAM PC
    client: Windows 7 on a ThinkPad SL510
    Software: the server is running NX Server 3.4.0, using the xfce4 window manager; the laptop is using NX Client for Windows.

    In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects. I am able to authenticate an NX session on my laptop. By this I mean: I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window. The problem, though, is that this remote desktop is "fragmented" into many windows. One window will display the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash") while another window will contain the taskbar, and another window will contain the application strip. I can select each of these windows individually, but I cannot click on any objects within them. I've searched Super User, Ubuntu Forums, NX help, Server Fault, and tried many Google searches - none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server - it's been a long time since I've used Linux and there's a lot I just don't know.

    What I am NOT trying to do: use desktop sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want to have a monitor or keyboard connected to the Linux box. Thanks for your help!


  • SQL server agent job to execute SSIS package fails, package succeds if run manually

    - by growse
    I've got an SSIS package installed on a SQL server (SQL Server 2012). It's fairly simple and just fetches data from a remote data source and adds it into a local table. The remote connection string uses SQL Server authentication, while the local connection uses Windows auth. The remote connection password is protected, and the package was imported with the protection level set to "Rely on server storage and roles for access control". If I run the SSIS package manually, it works. If I run it from the command line using dtexec, it works. If I use runas to switch to the domain account that the SQL Server Agent is running under, and then run the package using dtexec, it works. If I create a SQL Agent job with a single step to run the package, it fails, providing very little detail as to what's going on. I'm guessing it's not able to get the password to log into the remote SQL server, because it fails very quickly. Also, if I tick 'log to table' and view the resulting file, I get the following:

        Description: ADO NET Source has failed to acquire the connection
        {0D8F2CD4-A763-4AEB-8B52-B8FAE0621ED3} with the following error message:
        "Login failed for user 'username'."

    If I try to add the password to the connection string manually under data sources in the job step dialog, it refuses to save it, always seeming to remove the 'password' part of the connection string. I thought that SQL Server Agent jobs always ran under the context of the account which the SQL Server Agent is running under. This account is a sysadmin on the local SQL server, and the package works using dtexec under that account, so why would it fail when trying to run as an agent job?
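
    One standard thing to try is running the job step under an explicit proxy rather than the agent service account - sketched below with placeholder names via the msdb procedures (whether this helps depends on whether the failure really is decryption of the stored password rather than the execution context):

        -- in SSMS: create a credential and an SSIS proxy for the job step
        CREATE CREDENTIAL SsisRunner
            WITH IDENTITY = 'DOMAIN\svc_ssis', SECRET = 'password-here';

        EXEC msdb.dbo.sp_add_proxy
             @proxy_name = 'SsisProxy',
             @credential_name = 'SsisRunner',
             @enabled = 1;

        -- subsystem 11 = SSIS package execution
        EXEC msdb.dbo.sp_grant_proxy_to_subsystem
             @proxy_name = 'SsisProxy',
             @subsystem_id = 11;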


  • Using modem for sending voice recording

    - by ircmaxell
    I've got an interesting one for you. I've been going over my server monitoring and notification systems (Nagios-based), and realized that if our internet connection goes down, there's no way for them to notify me. I already have a modem listening (via CentOS 5) on a spare POTS line so that I can dial in if our internet goes down. I was wondering if I could come up with a script (shell, Python, etc.) that can dial out and play a recorded message (a wave file, I'm guessing) when the call is picked up. I know Windows supports voice calls over a voice modem; I was wondering if a solution exists for Linux. I know Asterisk can probably do it, but isn't that overkill (a full-blown VOIP system just for a notification mechanism that will hopefully never be used)? And wouldn't it interfere with the modem's primary function as a backup network interface (PPP spawned via mgetty)? I've done some searching and haven't really come up with much. I know how to dial out from the command line, but only as a modem (not as voice). Worst case, I could set it up to dial out as a modem, and just understand that if I get a call with modem sounds from that number, it's the notification... Any insight would be appreciated...
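
    For what it's worth, since mgetty is already in the picture: its voice companion (vgetty, from the mgetty+voice suite) adds voice-modem support and ships a 'vm' helper that dial-out scripts are typically built around. A rough sketch only - the device, modem type, and whether this modem supports voice mode at all are assumptions:

        # Convert a WAV alert to the modem's RMD voice format with pvftools
        # (part of mgetty+voice); the modem-type argument varies per chipset
        wavtopvf alert.wav alert.pvf
        pvftormd Rockwell 4 alert.pvf alert.rmd

        # Underneath, the dial-out flow drives AT voice commands roughly like:
        #   AT+FCLASS=8    enter voice mode
        #   ATD5551234;    dial (trailing ';' = no data handshake)
        #   AT+VTX         stream the recorded audio down the line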


  • Proper High-End Video Editing Software for Windows?

    - by Michael Stum
    I'm wondering if anyone here has experience with prosumer/high-end video editing. So far I use Adobe Premiere CS4, but sadly it is an unstable and buggy mess. This could be partially caused by the fact that my input material isn't always pure HDV/AVCHD but sometimes DivX or already pre-processed video. The one thing I liked about Adobe is that with After Effects and Encore they have a good overall toolset - but if it's sub-par, then it's no good. Luckily it is already paid for and was worth its money overall, so it's not a complete waste of money. But are there alternatives? After Effects in particular is quite unique, and the closest thing I found is Apple's Final Cut Studio, which includes Motion and DVD Studio. The two downsides are that it requires a Mac and that it doesn't support Blu-ray. Any hints for Windows? Sony Vegas might be something I'll look at, but I'm guessing I'd have to keep using After Effects for any serious compositing/VFX?


  • Asus MyCinema U3100mini Choppy

    - by dsimcha
    I'm running an Asus MyCinema U3100mini ATSC on Windows 7 64-bit. When I play live TV in Windows Media Center, it's very choppy and uses 500+ MB of RAM, I'm guessing due to the hard drive buffering functionality. Is there any way to disable the live TV pause buffer completely? If not, can anyone recommend alternative software that works with the MyCinema and is lightweight, not horribly bloated with features I'll never use like Windows Media Center is?

    Edits: This is a dual-boot system. I've discovered that the tuner actually works fine on XP. It also works fine on my other computer, which has slower hardware and also runs Windows 7 64-bit. The problem actually seems to be with playback at large screen sizes, not with hard drive buffering. Everything works fine below a certain window size and fails for large windows or full screen. Also, the same thing seems to happen whether playing live or recorded TV. As far as the obvious stuff goes, I have the latest video drivers from ATI for my Radeon X1050.

