Search Results

Search found 19539 results on 782 pages for 'pretty print'.


  • Font for Wine that supports the entire character set of the Win32 Console?

    - by Brian Campbell
    I would like to be able to display in the Wine console all the characters that the Win32 console can display. I've written a small test program to print out all 8-bit characters:

        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int i, j;
            for (i = 0; i <= 0xF0; i += 0x10) {
                for (j = i; j <= i + 0x0F; ++j)
                    printf("%2x:%c", j, (char)j);
                printf("\n");
            }
            getchar();
            return 0;
        }

    Under Wine, the best I can do so far is with Andale Mono (screenshot omitted), whereas Windows Server 2008 displays the full set. Is there anywhere I can legally download a font that will allow me to view all of those characters under Wine?

    Edit: I've found a set of DOS fonts that includes a CP437 font, which should cover the character set I'm interested in. However, even if I install this font, wineconsole doesn't seem to recognize it. Is there any way I can get wineconsole to use this font, or convert it to a format that wineconsole can use? Or is there any way I can extract fonts from DOSEMU for use in Wine? I should probably also mention that I'm on Mac OS X 10.6.2, installing Wine via MacPorts, using the wine-devel package.

    More information: I have tried installing some console fonts that should cover the full character set as Mac OS X fonts (such as the NewDOS font listed above, and a font I tried converting from the fonts supplied by DOSEMU). Wine does not seem to pick up on new fonts installed in Mac OS X. Is there a way to register newly installed fonts with Wine? Would manually editing the system.reg file that seems to contain font mappings work, or is there something else I'd need to do?

    Bump: the bounty ends soon and I'm still looking for an answer. Does anyone use the Wine console for complex text user interfaces?
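    One hedged thing worth trying, assuming a default Wine prefix at ~/.wine: drop the CP437-capable TrueType font into the prefix's own Fonts directory so Wine's font loader can see it, then point the console at it by face name. The font filename below is a placeholder, and the HKCU\Console\FaceName value is an assumption based on the Windows console settings Wine mirrors, not something verified against this Wine build.

        # Assumption: ~/.wine is the active prefix and the CP437 font is a .ttf file
        cp your-cp437-font.ttf ~/.wine/drive_c/windows/Fonts/

        # Assumption: wineconsole honours the per-user console settings under HKCU\Console
        wine reg add "HKCU\Console" /v FaceName /t REG_SZ /d "Your CP437 Font Name" /f
        wineconsole cmd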

  • What's up with stat on Mac OS X/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO ("Give the mount point of a path"), one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS X 10.4. For my system, df and mount give:

        cas cas$ df
        Filesystem 512-blocks Used Avail Capacity Mounted on
        /dev/disk0s3 58342896 49924456 7906440 86% /
        devfs 194 194 0 100% /dev
        fdesc 2 2 0 100% /dev
        <volfs> 1024 1024 0 100% /.vol
        automount -nsl [166] 0 0 0 100% /Network
        automount -fstab [170] 0 0 0 100% /automount/Servers
        automount -static [170] 0 0 0 100% /automount/static
        /dev/disk2s1 163577856 23225520 140352336 14% /Volumes/Snapshot
        /dev/disk2s2 409404102 5745938 383187960 1% /Volumes/Sparse

        cas cas$ mount
        /dev/disk0s3 on / (local, journaled)
        devfs on /dev (local)
        fdesc on /dev (union)
        <volfs> on /.vol
        automount -nsl [166] on /Network (automounted)
        automount -fstab [170] on /automount/Servers (automounted)
        automount -static [170] on /automount/static (automounted)
        /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
        /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

        cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
        / disk0s3r
        /dev ???r
        /dev ???r
        /.vol ???r
        /Network ???r
        /automount/Servers ???r
        /automount/static ???r
        /Volumes/Snapshot disk2s1r
        /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page:

        The special output specifier S may be used to indicate that the output,
        if applicable, should be in string format.  May be used in combination
        with: ... dr Display actual device name.

    What's going on? Is it a bug in stat, or some Darwin VFS weirdness?

    Postscript: per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from the CWD...
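    If the goal is just to map a path to its backing device, one workaround that stays within the data shown above is to ask df for the filesystem containing the path rather than going through stat's %Sdr. A minimal sketch using POSIX df/awk; the output comments are taken from the listing above, and the virtual filesystems simply have no /dev node to report, which is presumably why %Sdr has nothing better than "???" for them:

        # Print the filesystem source for the volume holding a given path
        dev_for() {
            df -P "$1" | awk 'NR==2 {print $1}'
        }

        dev_for /Volumes/Snapshot    # -> /dev/disk2s1
        dev_for /Network             # -> "automount" (no device node exists)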

  • Why can't I see installed AIR applications in /Applications on Mac OS Lion?

    - by Damir Zekic
    Yesterday I did a clean install of Mac OS X Lion and installed a bunch of apps, including the HipChat desktop application (an Adobe AIR app). I used it for some time and eventually closed it. Today I wanted to start it again, but couldn't find it in Launchpad. I looked in the /Applications folder, but couldn't find it there either (I still don't have ~/Applications). I went back to the HipChat website (from where I installed it) and a Flash plugin allowed me to run the app immediately. It shows up in my Dock and if I "keep it" there, I can launch it again (it still doesn't appear in my /Applications directory). However, it asks me for confirmation about launching an app that has been downloaded from the Internet every time I start it, which is annoying. I also see the app in the Adobe AIR uninstaller. I then went to my old Snow Leopard installation and found HipChat in its /Applications folder. I copied it to the Lion disk and it works. Now I have two HipChat entries in the Adobe AIR uninstaller. I guess I solved the issue at hand, but I still don't understand where the original app is located and how I can access it and move it to /Applications. I couldn't find it anywhere with either Finder search or the find command in the terminal (I used find / -name "HipChat*" -print). So, how does Adobe AIR store installed applications on Mac OS X Lion, and is there any way to get them to show up in Launchpad?

  • Computer Comparison - which is "better"

    - by David Murdoch
    A company I work with recently replaced their old server and gave it to me. Their old server is a Dell PowerEdge 2600. I've been playing with the machine and even installed Windows Server 2008 on it, and it seems to run it pretty well. Here are the specs for the two machines:

    Dev machine:
    - AMD Athlon64 3000+ at 2.38 GHz (overclocked from 1.8 GHz [@ 280x8.5] - it is stable-ish)
    - Memory (RAM): 1x1GB OCZ PC3200 (dual-channel)
    - 300 GB HD
    - OS: Windows XP Pro (32-bit)
    - SuperPi 1M digit test: 40 seconds

    Dell PowerEdge 2600 server:
    - Intel Xeon CPU at 2.8 GHz
    - Memory (RAM): 512MBx2 (PC2700, not dual-channel)
    - 68 GB HD (RAID 5)
    - OS: Windows Server 2000 (32-bit)
    - SuperPi 1M digit test: 56 seconds [using 1 processor] (themes and Aero/glass UI turned off, of course)

    I use my computer to regularly run Photoshop CS5, Illustrator CS5, Flash CS5, 5 browsers (Chrome, FF, IE, Safari, Opera), iTunes, Visual Studio 2010, and Kaspersky Internet Security 2010 [sometimes simultaneously :-) ]. The SuperPi test has my dev machine coming in about 30% faster than the server machine, though this could be due to the server running "Vista" with background processes prioritized. Do you think it would be realistic/advantageous for me to move from my dev machine to the Dell PowerEdge 2600? Is it possible to install additional DVD drives/burners on the server? Can I install my internal 300 GB hard drive on the server? Can I add some USB 2.0 ports? Note: I'll probably install Win XP Pro on the dev machine if I do switch. If not, are there any creative and useful ways for me to take advantage of this server (with the goal of faster computing)?

  • Set up server at home for running svn and Bugzilla

    - by erikric
    Hi, I'm a fairly experienced developer, but when it comes to servers and network-related stuff, I'm pretty green. We are developing a web site, and I would like to set up a server that can host my Subversion repository, and also host Bugzilla for when we release a test version to some users. So what do I need to accomplish this? I have an old computer that can be used. Can I run this on any OS? It currently has Windows 7 installed, but I was thinking about going for Ubuntu instead - any reason to go for one or the other? I guess I need a web server, and I guess Apache will do fine. Do I need something else to make my computer available from all over the web, or is a web server and a standard internet connection all it takes? A link to some good online tutorials would be much appreciated - and I mean for real dummies ;). I usually find that pages trying to explain how to set up servers go way over my head.
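    As a rough starting point on Ubuntu, a hedged sketch of the Subversion-over-Apache part (package names vary a little between releases, and Bugzilla may need to be installed from the upstream tarball rather than apt on newer versions; "mysite" and the user name are placeholders):

        # Apache, Subversion and the Apache DAV module for SVN
        sudo apt-get install apache2 apache2-utils subversion libapache2-mod-svn
        sudo apt-get install bugzilla3    # package name varies by release; may need the upstream tarball instead

        # Create a repository and hand it to Apache
        sudo mkdir -p /var/svn
        sudo svnadmin create /var/svn/mysite
        sudo chown -R www-data:www-data /var/svn/mysite

        # Basic-auth user file referenced by the stock dav_svn.conf
        sudo htpasswd -c /etc/apache2/dav_svn.passwd erik

        # Enable the modules, point the <Location /svn> block in dav_svn.conf at /var/svn, restart
        sudo a2enmod dav dav_svn
        sudo service apache2 restart

    To reach it from outside, a standard connection plus a port-forward on the router (and ideally a dynamic-DNS name) is all that is strictly required.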

  • Using IIS7 as a reverse proxy

    - by Jon
    My question is pretty much identical to this one: http://serverfault.com/questions/55309/using-iis7-as-a-reverse-proxy - but it never got an answer, because the asker ended up using Linux as the reverse proxy. I need IIS to be the main site and Linux (Apache) to be the proxied site(s). So I have site1.com (IIS7) and site2.com (Linux Apache), with subdomains sub1.site1.com, sub2.site1.com and sub3.site2.com. I want all traffic to go to site1.com, and anything that is site2.com should be proxied to the Linux box on the internal network (I believe ARR can do this, but I'm not sure how). I cannot have Apache doing the proxying, as I need IIS exposed directly. Any and all advice would be great.

    EDIT: I think this might help me:

        <rule name="Canonical Host Name" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" negate="true" pattern="^cto\.com$" />
            <add input="{HTTP_HOST}" negate="true" pattern="^antoniochagoury\.com$" />
            <add input="{HTTP_HOST}" negate="true" pattern="www.antoniochagoury\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.cto20.com/{R:1}" redirectType="Permanent" />
        </rule>

    (from http://www.cto20.com/post/Tips-Tricks-3-URL-Rewriting-Rules-Everyone-Should-Use.aspx). I will have a look at this when I have access to the IIS7 box. Thanks
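    For the ARR piece, the usual two steps are to enable proxying at the server level and then add a rewrite (not redirect) rule whose action forwards matching hosts to the internal Apache box. A hedged sketch of the first step using appcmd, assuming ARR and the URL Rewrite module are already installed:

        REM Enable the reverse-proxy feature that ARR adds to IIS (server-wide setting)
        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /enabled:"True" /commit:apphost

    The rule itself then looks much like the canonical-hostname example quoted above, except the condition matches {HTTP_HOST} against site2.com (and its subdomains) without negate, and the action is of type "Rewrite" with a url pointing at the internal Apache address followed by {R:1}.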

  • Free web-based software for team collaboration/documentation

    - by Jason Antman
    Looking for some advice here, as my search has turned out to be pretty fruitless. My group (9 people - SAs, programmers, and two network guys) is looking for some sort of web tool to... ahem... "facilitate increased collaboration" (we didn't use a buzzword generator, I swear). At the moment, we have a unified ticketing system that's braindead, but is here to stay for political/logistical reasons. We've got 2 wikis ("old" and "new"), neither of which fulfills our needs, and which are therefore not used very often. We're looking for a free (as in both cost and open source) web-based tool.

    Management side: wants to be able to track project status, who's doing what, whether deadlines are being met, etc. Doesn't want a full-fledged "project management" app, just something where we can update "yeah this was done" or "waiting for Bob to configure the widgets". TeamBox (www.teambox.com) was suggested, but it seems almost too gimmicky, and doesn't meet any of the other requirements.

    Non-management side:
    - flexible, powerful wiki for all documentation (i.e. includes good tables, easy markup, syntax highlighting, etc.)
    - good full-text search of everything (i.e. type in a hostname and get every instance anyone ever uttered that name)
    - task lists or ToDo lists, hopefully able to be grouped into a number of "projects"
    - file uploads
    - RSS or Atom feeds, email alerts of updates

    We're open to doing some customizations (adding some features, notification/feeds, searching, SVN integration, etc.) but need something F/OSS that will run under Apache. My conundrum is that most of the choices I've found so far fall into one of these categories:
    - project management/task tracking with poor wiki/documentation/knowledge base support
    - wiki with no task tracking support
    - ticketing system with everything else bolted on (we already have one that we're stuck with)
    - code-centric application (we do little "development", mostly SA work)

    Any suggestions? Or, lacking that, any comments on which software would be easiest to add the lacking features to (hopefully ending up with something that actually looks good and works well)?

  • DNS: something is wrong?

    - by Nickolas R.
    Hello, I am configuring bind9 on a server with two network interfaces; one is connected to the LAN and the other is connected to the Internet through NAT, so bind is not facing the Internet directly. Everything seems to work fine - clients can do both forward and reverse lookups - but something seems strange. On the server, if I ping www.google.com just once, a great amount of network activity is generated, a lot more than one would expect, so I decided to sniff the traffic with tcpdump. When loading the dump into Wireshark I can see about 250 entries with "Standard query A" and "Standard query response". Here are some of the entries from the dump:

        DNS Standard query A www.google.com
        DNS Standard query A blackhole-1.iana.org
        DNS Standard query A blackhole-2.iana.org
        DNS Standard query response
        DNS Standard query A ns2.isc-sns.com
        DNS Standard query A ns1.isc-sns.net
        DNS Standard query A ns3.isc-sns.info
        DNS Standard query response PTR b.iana-servers.net RRSIG
        DNS Standard query A auth2.dns.cogentco.com
        DNS Standard query A ns1.crsnic.net
        DNS Standard query A ns2.nsiregistry.net
        DNS Standard query A ns3.verisign-grs.net
        DNS Standard query A ns4.verisign-grs.net
        DNS Standard query PTR 79.52.19.199.in-addr.arpa

    I do not have much experience with DNS yet, but I am pretty sure that something is wrong. Does anybody have an idea of what is going on?
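    Much of that traffic is likely just normal behaviour for a freshly started recursive resolver: iterative resolution walks from the roots down through the TLD and authoritative nameservers (which explains the verisign-grs.net and crsnic.net lookups), and ping adds a reverse (PTR) lookup of its own. Two quick ways to watch it happen, sketched below; dig and rndc ship with BIND, and the log path depends on the distro:

        # Walk the delegation chain the same way the resolver does,
        # showing every nameserver that gets asked along the way
        dig +trace www.google.com

        # Log every query BIND answers while you test
        rndc querylog on      # older BIND versions just toggle querylog with no argument
        ping -c 1 www.google.com
        tail -f /var/log/syslog   # or /var/log/messages, depending on the distro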

  • Robocopy launches and then hangs/just sits there

    - by NateO
    I'm setting up an archive process to store old files on an external hard drive. The computer in question is running Windows 7 Pro 32-bit. We have a server folder with 150,000+ files in it, most of which are pretty small (below 200k). I'm trying to use robocopy in a batch file to do this. It was working fine the other day; now all it does upon launch is sit there. It shows me all the options and whatnot, and also lists the number of files in the directory and the directory itself, but it never gets past that line. If I switch the destination to the local C drive, it eventually starts copying files. Is there something in my batch file that needs to change? Or could there be a problem with the external Western Digital drive that I'm using? The WD drive currently holds about 175,000 files. Here is the one-line batch file I have:

        robocopy "\\cgifp01\Prepress\Public\ImportedPDF" "E:\OldFiles" *.* /R:2 /W:10 /MINAGE:15 /MOV /B /XJ /XF "blank_test.pdf"

    Thanks for any tips or ideas. Nate
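    One way to see where it actually stalls is to make robocopy verbose and log to a file, and to temporarily drop the /B (backup-mode) switch, which needs extra privileges and adds overhead. A hedged variant of the same command; all of the added switches are standard robocopy options, and the log path is just an example:

        robocopy "\\cgifp01\Prepress\Public\ImportedPDF" "E:\OldFiles" *.* ^
            /R:2 /W:10 /MINAGE:15 /MOV /XJ /XF "blank_test.pdf" ^
            /V /NP /TEE /LOG:C:\Temp\robocopy-archive.log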

  • No digital audio output with Asus Xonar DG

    - by Lunatik
    I've purchased an Asus Xonar DG as a replacement for the faulty onboard audio in a Medion 8822, as it has an optical output, which is all I really need to feed my HTPC. I uninstalled the previous drivers/devices, switched the PC off, inserted the Asus card, powered up, disabled the onboard audio in the BIOS, then installed the driver that came on the CD (same version as on Asus' website as of today) and everything went perfectly - no errors. I set the audio devices up in Windows and in the Asus utility (SPDIF enabled, 6-channel audio) as I would expect to see them work, but I get no digital audio at all: no test tones from within Windows or the Asus utility, no PCM audio, and no Dolby Digital from DVD. Analogue audio is fine. I've uninstalled things and reinstalled a couple of times now, as well as trying almost all combinations of analogue/digital outputs, but can't get it sorted. Does anyone have any tips on how to get this working? This card has just been released, so there isn't much out there to go on.

    Notes:
    - The light on the Toslink port is lit.
    - OS is Vista 32-bit SP2, all up to date; pretty much a fresh install with almost no 3rd-party applications installed.
    - This page seems to suggest that a digital output device in Windows is not needed with Xonar cards as it was with the previous Realtek, so I have it set to Analog. The only other output device is S/PDIF pass-thru.

  • Installation of Active Directory on a separate VM from DNS does not entirely work - not sure why

    - by René Kåbis
    Not sure what I am doing wrong here. I have a moderately midrange server (16 cores, 2 GHz, 32 GB ECC REG RAM, 6 TB storage, nothing too extreme) where I am running Hyper-V (Server 2012 R2 Enterprise) in order to provision virtual machines. So why an AD separate from DNS? I want redundancy. I want to be able to move VMs and back them up individually and not have too many services on any one VM. I have already provisioned a VM with DNS, and have set it up right - essentially, I have:

    - Set up static IPs for everyone involved.
    - Installed the DNS service on the DNS VM.
    - Created a forward lookup zone and a reverse lookup zone (primary zone) xyz.ca.
    - Configured the zones to use nonsecure and secure dynamic updates (I will change this to secure later, after the domain controller is online).
    - Created an A record for the DC in the forward lookup zone (and a reverse PTR).
    - Changed the DC's DNS server (network settings) to the new DNS server.
    - Checked that I can ping the DNS server from the new DC by hostname.

    When I went ahead and ran dcpromo on the DC, and unchecked the "install DNS" option, everything seemed to go well (no error messages), but I saw no changes on the DNS server whatsoever (no additional settings). Plus, the DNS server seems to be unable to join the domain, as it claims that the domain is not discoverable.

    As a final note, I do run Symantec Endpoint Protection, which includes a firewall, with most settings left at their defaults. I have not yet tried turning this off, but my experience has been that if a service would open up a port on the Windows firewall, it would do the same through Symantec. There is pretty tight integration these days between corporate-class AV and Windows. I have a template VHDX fully set up (just short of any special roles and features) that I can use to replace the current AD VM with, so doing this all over again is not too much skin off my nose.
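    Based on the setup described above, the first things worth checking are whether the new DC ever registered its SRV records in the xyz.ca zone and whether the domain can be located from the DNS VM at all. A sketch of the standard diagnostics (xyz.ca stands in for the real domain; run the last two on the DC itself):

        REM Did dcpromo/Netlogon register the domain controller's SRV records in the zone?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.xyz.ca

        REM Can the DC locator actually find a DC for the domain from this machine?
        nltest /dsgetdc:xyz.ca

        REM On the DC: restarting Netlogon re-registers the SRV records, then run the DNS health test
        net stop netlogon
        net start netlogon
        dcdiag /test:dns /v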

  • What web-based tool will allow a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy that requires individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized_keys file. I am using chef to sync the user accounts across machines; however, our previous method of using webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users.

    The developers are logging in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using webmin clustered users and groups. However, looking at the currently installed webmin, it is not as easy to create the authorized keys as it is to create user accounts and passwords. (It's possible, but it's not easy - some functionality is in the usermin module, or would require some tedious steps.)

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to the user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use chef to do that part if the accounts are created correctly on the "master" server.
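    Whatever front end ends up doing it, the per-user operation the tool has to perform is small, so even a thin web wrapper around something like the following would meet the requirement. A sketch only; the script name, username and key are placeholders, and the group is assumed to match the username:

        #!/bin/sh
        # Append a pasted public key to a user's authorized_keys with sane permissions.
        # Usage: ./addkey.sh alice "ssh-rsa AAAA... alice@laptop"
        user="$1"
        key="$2"
        home=$(getent passwd "$user" | cut -d: -f6)

        install -d -m 700 -o "$user" -g "$user" "$home/.ssh"
        printf '%s\n' "$key" >> "$home/.ssh/authorized_keys"
        chmod 600 "$home/.ssh/authorized_keys"
        chown "$user:$user" "$home/.ssh/authorized_keys"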

  • Handle Sysinternals software does not accept -c parameter

    - by Alex
    I am trying to close a handle to a locked file in Windows, using the Sysinternals Handle utility (http://technet.microsoft.com/en-us/sysinternals/bb896655). First I search for open handles:

        handle.exe "C:\Temp"

    It gives me the following:

        Far.exe  pid: 1144  type: File  2E8: C:\Temp
        Far.exe  pid: 1144  type: File  3A8: C:\Temp

    Next I run handle.exe with the -c parameter. However, whichever number I enter, it does not do anything. I have tried 1144, 2E8, 3A8, and 1144 in hex (478), as the help says it accepts hexadecimal numbers. Whatever I enter, it just prints the following:

        Handle v3.46
        Copyright (C) 1997-2011 Mark Russinovich
        Sysinternals - www.sysinternals.com

        usage: handle [[-a [-l]] [-u] | [-c <handle> [-y]] | [-s]] [-p <process>|<pid>] [name]
          -a     Dump all handle information.
          -l     Just show pagefile-backed section handles.
          -c     Closes the specified handle (interpreted as a hexadecimal number).
                 You must specify the process by its PID.
                 WARNING: Closing handles can cause application or system instability.
          -y     Don't prompt for close handle confirmation.
          -s     Print count of each type of handle open.
          -u     Show the owning user name when searching for handles.
          -p     Dump handles belonging to process (partial name accepted).
          name   Search for handles to objects with <name> (fragment accepted).
        No arguments will dump all file references.

    What am I doing wrong?
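    Reading the usage text above, -c takes the handle value while the process has to be named separately with -p, so both numbers from the search output are needed on the same command line. A sketch against the values shown (run from an elevated prompt):

        REM Close handle 2E8 (hex) belonging to PID 1144, without the confirmation prompt
        handle.exe -c 2E8 -p 1144 -y

        REM The second handle to the same folder
        handle.exe -c 3A8 -p 1144 -y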

  • Backup data rate on Raspberry Pi maxing out at 5 Mb/s. Why?

    - by bastibe
    I set up my Raspberry Pi as a Time Machine, as documented here. At the moment, the Raspberry Pi is connected to my MacBook Pro using a direct Ethernet cable. Also, an external hard drive (laptop drive) is connected to the Raspberry Pi using the USB port. However, backups are pretty slow. Activity Monitor claims that the Network is transferring a very steady 5 Mb/s, where my Time Capsule is transferring up to 8 Mb/s with a lot of fluctuation. The Raspberry Pi self-reports (top) that its CPU is only half-used, with about equal parts afpd, usb-storage and jbd2/sda1-8. Thus, I think that the processing power of the Raspberry Pi does not seem to be the problem here. To me, this looks like there is some kind of bottleneck that maxes out at 5 Mb/s thus potentially having my backups run at less than their potential speed. To the best of my knowledge, this might be the afp-daemon, the usb-bus or the external hard drive. So, my question is, how could I identify the true culprit and what can I do about it?
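    To find out whether the ceiling is the network path, the USB disk, or afpd itself, it helps to measure each in isolation. A sketch; iperf would need to be installed on both ends, and the hostname and mount path are placeholders:

        # 1. Raw network throughput, independent of disk and AFP
        #    (run "iperf -s" on the Pi first, then on the MacBook:)
        iperf -c raspberrypi.local -t 30

        # 2. Raw write speed of the USB disk on the Pi, bypassing the network entirely
        #    (path is a placeholder for wherever the Time Machine volume is mounted)
        dd if=/dev/zero of=/mnt/timemachine/ddtest bs=1M count=256 conv=fdatasync
        rm /mnt/timemachine/ddtest

        # 3. If both numbers are comfortably above the ~5 Mb/s seen in Activity Monitor,
        #    the bottleneck is most likely afpd or the netatalk configuration itself.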

  • Setting up DNS using VirtualMin/WebMin

    - by Nyxynyx
    I am moving from a cPanel server to one where I've installed Virtualmin. The LAMP stack and the website files have been set up properly and I can access the website by its IP address.

    Problem: now it's time to point my domain mydomain.com to my new server. After reading many sites describing setting up BIND and master zones, I am pretty confused as to what to do, especially coming from a cPanel server where this is really simple to set up.

    Attempt: I tried to register my nameservers ns1.mydomain.com and ns2.mydomain.com at my domain registrar, but I am missing the IPs I need to point these nameservers to. Should I set ns1.mydomain.com to the IP address of my web server, and not register ns2.mydomain.com? When specifying the DNS for mydomain.com, I've set the first one to ns1.apadment.com. On the manager/admin page of my web host provider, I am given the option to create a secondary slave DNS, which I assigned to the IP address of my server - though I am not sure how the slave DNS will copy the info from my web server? I have assigned this secondary DNS, ns.hostprovider.com, as the second DNS for mydomain.com. I also tried creating a Virtual Server under Virtualmin, but it seems to mess up Apache's DocumentRoot for the site by creating and enabling a new vhost file that ends with .conf; I edited the .conf file to point DocumentRoot back to where it's supposed to be, /var/www/mydomain, instead of /user/mydomain.com. I believe the next step is to set up the zone. Virtualmin has already created a master zone with 8 different addresses (www.mydomain.com, ftp.mydomain.com, ...). Under Nameservers, there are already 2 records: one is the hostname (a random name given by the host provider, ns12345.ip123-123.net), the other is the secondary slave DNS provided by the host provider. Does having BIND running on my web server make the server the master DNS? Thank you!
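    Broadly: once the registrar glue records point ns1.mydomain.com at the server's public IP, the BIND instance on that box is the master for the zone, and the host provider's secondary typically pulls a copy via zone transfer. Two checks that quickly show whether the zone is actually being served, sketched for the Virtualmin box; the zone file path is a guess and is worth confirming in /etc/named.conf or Webmin's BIND module:

        # Does the zone file BIND loaded actually parse?
        named-checkzone mydomain.com /var/named/mydomain.com.hosts

        # Is the local BIND answering authoritatively for the zone?
        dig @127.0.0.1 mydomain.com NS +norec
        dig @127.0.0.1 www.mydomain.com A +norec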

  • Debugging Samba/CUPS printer sharing with Windows

    - by mrdrbob
    I've got a HP Deskjet hooked up to a Slackware 12.2 box. I've got CUPS set up and can print a test page from the box just fine. I've also got Samba set up and have a couple file shares that work fine. I'm trying to share that HP Deskjet out via Samba, but I can't get it to show up in any Windows system. I see the server and its file shares in Windows networking, but when I open the Printers, no printer shows up. Running net view \\servername from the command line lists the file shares, but no printers. Here's the pertinent part of my smb.conf, if that helps:

        [global]
        workgroup = HOMENET
        security = share
        hosts allow = 192.168.1. 192.168.2. 127.
        load printers = yes
        printcap name = cups
        printing = cups
        log file = /var/log/samba.%m
        max log size = 50

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = no
        public = yes
        writable = no
        printable = yes
        guest only = yes

    Can anyone give me some pointers as to where to start looking for potential causes?

    Update: Running testparm shows no errors. Here's the output (minus the file shares):

        [global]
        workgroup = HOMENET
        security = SHARE
        log file = /var/log/samba.%m
        max log size = 50
        printcap name = cups
        hosts allow = 192.168.1., 192.168.2., 127.

        [printers]
        comment = All Printers
        path = /var/spool/samba
        guest only = Yes
        guest ok = Yes
        printable = Yes
        browseable = No
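    A few checks on the Slackware box usually narrow this down; a sketch below (testparm and smbclient ship with Samba, lpstat with CUPS, and "servername" is a placeholder):

        # Does Samba itself think it is exporting any printers?
        testparm -s | grep -A6 '\[printers\]'
        smbclient -L //servername -N     # printers should appear in the share list

        # The [printers] path must exist and be world-writable (sticky bit) for spooling
        ls -ld /var/spool/samba || mkdir -m 1777 /var/spool/samba

        # Can Samba see the CUPS queue? The queue name is what Windows clients will be offered
        lpstat -p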

  • Why is 10-year-old software still so slow, even today?

    - by Cawas
    I only noted this because of a game (which happens to be Diablo 2), but the matter of fact is: why can't my brand-new MacBook Pro, made in 2009 with the latest technology (though it's the cheapest one), rival the computer that used to run this game much faster back in 2000? Really, it was much faster on my AMD K6 450 back in those days, and I could even run two clients at the same time with no slowdown. I've always had the feeling this machine was slow, but this is a very odd way to confirm it. Granted, the machine is smaller, runs on wifi and "boots" way faster thanks to sleep mode. But other than that, what have we evolved after all?! I'm pretty sure this shouldn't be the graphics card's fault. Sure, if I buy the latest technology it will run fast, and probably most people here can confirm this and won't even understand my question. But the thing is, all the hardware is supposedly much faster and better than the stuff from 10 years ago. The software and operating system became more complex, but also more refined. Now I'm trying a piece of software that is actually 10 years old and it's not getting any better results! Why?

  • How do you automatically close 3rd party applications when LiberKey is shut down?

    - by NoCatharsis
    Within LiberKey, I have added my own portable applications that are not included within the LiberKey library. When you go into the Properties menu for the app in the LiberKey UI, the Advanced tab has an option for Autoexecute. This dropdown menu seems to have no visible effect, at least on my current installation. I found that I could right click within the primary GUI and select "Add software group", add all 3rd party applications, then go to the Advanced tab within THAT Properties screen and select Autoexecute - "Always on startup". This solved the problem for starting the apps when LiberKey starts. However, now I'm having the same issue when closing out LiberKey. I have created a new 3rd party app that calls the same .exe, but sends the Parameter "/close". I then went to the Advanced tab and selected Autoexecute - "Always on shutdown". Seems pretty logical right? But the apps will not close on LiberKey shutdown. I cannot handle the app close-outs in the same way with a software group, as I did with the startup issue because the Autoexecute drop-down does not have an "Always on shutdown" option. Unfortunately, many of the Q&A forums on liberkey.com are in French and I took Spanish in high school. Otherwise I've not been able to find a workable answer. Any suggestions?

  • VMWare Newbie - looking for hardware recommendations and help :) [closed]

    - by Dan
    I am looking for some hardware recommendations for an upcoming virtualization project. We are a small company (80 users - 25 in site 1, 55 in site 2) currently using Windows Server 2003 - no VM servers yet. Our AD is set up so that site 1 is the root domain and site 2 is a subdomain/subnet, connected by T1 and VPN for failover. The current DCs also serve as file servers, print servers and antivirus servers. Email is in the cloud. Additionally, in site 1 we have 3 more member servers: one running IBM WebSphere for a customer-specific app, one running Infor PowerLink (no real heavy load) and another that we use for Visual Studio apps and that also runs DirSync for Exchange Online. No heavy workloads on any of these machines, really. We also have an AS/400 box that we run ERP/CRM software on, which site 2 connects to over the WAN link. In site 2 we also have a SQL machine that runs on Windows 2000 Server. Database files are not large - less than 5 GB - with a light to medium workload on this machine. File servers in each site store less than 500 GB of data and probably won't grow to more than 1 TB in the next 5 years. I am looking to go to VMware in both sites and virtualize all servers. What recommendations do you have for server and storage hardware? Is it safe to virtualize all of your DCs? Any help or advice would be greatly appreciated. Thanks.

  • Possible to IPSec VPN Tunnel Public IP Addresses?

    - by caleban
    A customer uses an IBM SAS product over the internet. Traffic flows from the IBM hosting data center to the customer network through Juniper VPN appliances. IBM says they're not tunneling private IP addresses; they say they're tunneling public IP addresses. Is this possible? What does this look like in the VPN configuration and in the packets? I'd like to know what the source/destination IPs and ports would look like in the encrypted, tunneled IPsec payload and in the IP packet carrying the IPsec payload:

        IPsec payload:  source 1.1.1.101:1001   destination 2.2.2.101:2001
        IP packet:      source 1.1.1.1:101      destination 2.2.2.1:201

    Is it possible to send public IP addresses through an IPsec VPN tunnel? Is it possible for IBM to send a print job from a server on their network, using its static-NAT public address, over a VPN to a printer at a customer network using the printer's static-NAT public address? Or can a VPN not do this? Can a VPN only work with interesting traffic to and from private IP addresses?

  • Can I get 2Gbit over 1Gbit NICs?

    - by Daniel
    So this really baffles me. Apparently, because 1Gbit Ethernet can transmit data in both directions simultaneously, it should be possible to get 2Gbit of data transfer on a single NIC (1Gbit send and 1Gbit receive). People claim that because 1Gbit is (almost always) full-duplex, it is exactly 2Gbit in total. My intuition and electrical background tell me that something is not right here: 4 twisted pairs with 250Mbit capacity each gives 1Gbit - unless it is really possible to transfer data in both directions simultaneously.

    I did a test with iperf: Ubuntu Server 12.04 <-> MacBook Pro, both with decent CPU speed. I tested the speed of the connection in each direction individually, and on the Mac I can see 112MB/s regardless of which direction the data is going. On Ubuntu, with vnstat and ifstat, I got 970Mbit speeds. Now, launching iperf in server mode on both machines at the same time and sending data using 2 iperf clients shows that, for example, the Ubuntu box is sending at 600Mbit and receiving 350Mbit, which adds up to pretty much a 1Gbit link. So to me there is no magical 2Gbit. Can someone confirm that, or tell me why I'm wrong?

    Another thing that confuses me is the fact that a 24-port switch has, for example: throughput up to 50.6 Mpps, switching capacity 68 Gbps, switch fabric speed 88 Gbps. Which would suggest they can handle 2Gbit per port.
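    Classic iperf can also drive both directions at once from a single client, which is an easier way to measure the same thing than juggling two separate client/server pairs. A sketch; the hostname is a placeholder and flag details differ between iperf versions:

        # On the Ubuntu box
        iperf -s

        # On the MacBook: -d runs the reverse test simultaneously, so if the link really
        # is full-duplex each end should report close to wire speed in its own direction
        iperf -c ubuntu-host -d -t 30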

  • i7 overclocking problem

    - by benwebdev
    Hi everyone, I've got an overclocked system that seems to be misbehaving. When trying to boot up from cold, the system just hangs and nothing is output to the screen. The fans are on as they should be, but nothing happens to finish the boot. From here I have to switch it off and then on again, and the boot completes. If I go into the tweak menu in the BIOS, I'm informed that a boot has failed. I've been in touch with Overclockers UK support a bit and there's not yet been a solution; we've mainly been tweaking the voltage for the CPU. Any suggestions? I'm new to overclocking, which is why I got a bundle with OCUK. With this issue happening on cold boot, it's tricky to test, as I have to make a change and then wait till the next day. My system is:

    - Intel Core i7 930 2.80GHz overclocked to 4GHz
    - Gigabyte X58A-UD3R (BIOS version: FC)
    - 6GB RAM
    - Power supply: CoolerMaster Silent Pro M series 700W

    One suggestion made by OCUK was that maybe it's the power supply, but I'm not sure and don't have a spare - it's brand new and a pretty expensive piece of kit. Any thoughts on this? Other recommendations for power? Thanks, Ben

  • Setup.exe called from a batch file crashes with error 0x0000006

    - by Alex
    We're going to be installing some new software on pretty much all of our computers and I'm trying to set up a GPO to do it. We're running a Windows Server 2008 R2 domain controller and all of our machines are Windows 7. The GPO calls the following script, which sits on a network share on our file server. The script itself calls an executable that sits on another network share on another server. The executable immediately crashes with an error 0x0000006. The event log just says this:

        Windows cannot access the file for one of the following reasons: there is a problem
        with the network connection, the disk that the file is stored on, or the storage
        drivers installed on this computer; or the disk is missing. Windows closed the
        program Setup.exe because of this error.

    Here's the script (which is stored on \\WIN2K8R2-F-01\Remote Applications):

        @ECHO OFF
        IF DEFINED ProgramFiles(x86) (
            ECHO DEBUG: 64-bit platform
            SET _path="C:\Program Files (x86)\Canam"
        ) ELSE (
            ECHO DEBUG: 32-bit platform
            SET _path="C:\Program Files\Canam"
        )
        IF NOT EXIST %_path% (
            ECHO DEBUG: Folder does not exist
            PUSHD \\WIN2K8R2-PSA-01\PSA Data\Client
            START "" "Setup.exe" "/q"
            POPD
        ) ELSE (
            ECHO DEBUG: Folder exists
        )

    Running the script manually as administrator also results in the same error. Setting up a shortcut with the same target and parameters works perfectly. Manually calling the executable also works. Not sure if it matters, but the installer is based on dotNETInstaller; I don't know what version, though. I'd appreciate any suggestions on fixing this. Thanks in advance!

    UPDATE: I highly doubt this matters, but the network share the script is hosted on is a shared drive, while the network share the script references for the executable is a shared folder. Also, both shares have Domain Computers listed with full access on both the sharing and security tabs. And PUSHD works without wrapping the path in quotes.
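    Since the error only appears when the installer is launched straight off the UNC path, one low-risk variation is to copy Setup.exe to the local machine first and run it from there, which sidesteps both the computer account's access to the remote share at launch time and the space in the share name. A hedged sketch in the same batch style, replacing only the inner block (paths as in the original script):

        IF NOT EXIST %_path% (
            ECHO DEBUG: Folder does not exist
            COPY /Y "\\WIN2K8R2-PSA-01\PSA Data\Client\Setup.exe" "%TEMP%\Setup.exe"
            START "" /WAIT "%TEMP%\Setup.exe" /q
            DEL "%TEMP%\Setup.exe"
        ) ELSE (
            ECHO DEBUG: Folder exists
        )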

  • VNC server failed to start CentOS

    - by Shaun
    I followed a tutorial on how to install and run VNC server on CentOS 6 (since FreeNX isn't supported yet) and I keep getting:

        Starting VNC server: 1:user [FAILED]

    How do I figure out what's going on here? I'm new to Linux/CentOS and I'm trying to get RDP going so I can step away from SSH as much as possible (you know us Windows users love our pretty GUIs). So, where is the error log and how do I find it? Or maybe someone else has experienced this and knows the solution based on the simple error given? After running in debug mode, here is my error:

        + . /etc/init.d/functions
        ++ TEXTDOMAIN=initscripts
        ++ umask 022
        ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin
        ++ export PATH
        ++ '[' -z '' ']'
        ++ COLUMNS=80
        ++ '[' -z '' ']'
        +++ /sbin/consoletype
        ++ CONSOLETYPE=pty
        ++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']'
        ++ . /etc/profile.d/lang.sh
        ++ unset LANGSH_SOURCED
        ++ '[' -z '' ']'
        ++ '[' -f /etc/sysconfig/init ']'
        ++ . /etc/sysconfig/init
        +++ BOOTUP=color
        +++ RES_COL=60
        +++ MOVE_TO_COL='echo -en \033[60G'
        +++ SETCOLOR_SUCCESS='echo -en \033[0;32m'
        +++ SETCOLOR_FAILURE='echo -en \033[0;31m'
        +++ SETCOLOR_WARNING='echo -en \033[0;33m'
        +++ SETCOLOR_NORMAL='echo -en \033[0;39m'
        +++ PROMPT=yes
        +++ AUTOSWAP=no
        +++ ACTIVE_CONSOLES='/dev/tty[1-6]'
        +++ SINGLE=/sbin/sushell
        ++ '[' pty = serial ']'
        ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d'
        + '[' -r /etc/sysconfig/vncservers ']'
        + . /etc/sysconfig/vncservers
        ++ VNCSERVERS='1:larry 2:moe 3:curly'
        ++ VNCSERVERARGS[1]='-geometry 800x600'
        ++ VNCSERVERARGS[2]='-geometry 640x480'
        ++ VNCSERVERARGS[3]='-geometry 640x480'
        + prog='VNC server'
        + RETVAL=0
        + case "$1" in
        + start
        + '[' 0 '!=' 0 ']'
        + . /etc/sysconfig/network
        ++ NETWORKING=yes
        ++ HOSTNAME=vps.binaryvisionaries.com
        ++ DOMAINNAME=server.name
        ++ GATEWAYDEV=venet0
        ++ NETWORKING_IPV6=yes
        ++ IPV6_DEFAULTDEV=venet0
        + '[' yes = no ']'
        + '[' -x /usr/bin/vncserver ']'
        + '[' -x /usr/bin/Xvnc ']'
        + echo -n 'Starting VNC server: '
        Starting VNC server: + RETVAL=0
        + '[' '!' -d /tmp/.X11-unix ']'
        + for display in '${VNCSERVERS}'
        + SERVS=1
        + echo -n '1:larry '
        1:larry + DISP=1
        + USER=larry
        + VNCUSERARGS='-geometry 800x600'
        + runuser -l larry -c 'cd ~larry && [ -r .vnc/passwd ] && vncserver :1 -geometry 800x600'
        + RETVAL=1
        + '[' 1 -eq 0 ']'
        + break
        + '[' -z 1 ']'
        + '[' 1 -eq 0 ']'
        + failure 'vncserver start'
        + local rc=1
        + '[' color '!=' verbose -a -z '' ']'
        + echo_failure
        + '[' color = color ']'
        + echo -en '\033[60G'
        + echo -n '['
        [+ '[' color = color ']'
        + echo -en '\033[0;31m'
        + echo -n FAILED
        FAILED+ '[' color = color ']'
        + echo -en '\033[0;39m'
        + echo -n ']'
        ]+ echo -ne '\r'
        + return 1
        + '[' -x /usr/bin/plymouth ']'
        + /usr/bin/plymouth --details
        + return 1
        + echo
        + '[' 1 -eq 98 ']'
        + return 1
        + exit 1
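    The trace above fails exactly at the runuser line, whose `[ -r .vnc/passwd ] && vncserver :1` guard silently skips any user who has never set a VNC password, so that is the most likely cause (or the configured user in /etc/sysconfig/vncservers doesn't match the one you intended). A quick check, sketched for the first configured user from the trace:

        # The init script silently fails for users with no ~/.vnc/passwd
        su - larry -c 'ls -l ~/.vnc/passwd'

        # Create a password, then try the same command the init script runs
        su - larry -c 'vncpasswd'
        su - larry -c 'cd ~ && vncserver :1 -geometry 800x600'

        # Any remaining errors end up in the per-display log
        su - larry -c 'cat ~/.vnc/*.log'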
