Search Results

Search found 10131 results on 406 pages for 'natural sort'.

Page 94/406

  • Improving browser performance while using lots of tabs?

    - by Andrew
    My browsing habits cause me to open lots of windows and tabs, either related to different projects I'm working on or things I may want to read later. I use OS X with about 5 spaces and multiple windows in each space. The problem is that eventually I'll have around 200 or more tabs open (spread over 15-20 windows) that I don't want to close. Needless to say, my computer's performance starts to degrade. As I write this on my mobile, Safari on my laptop is locking up the computer. I used to use Chrome but found better performance with Safari. What I'd like to know is: is there a graph of browser performance based on tab usage? I don't need a browser that keeps all tabs active. It would be great if the browser could increase performance by "putting tabs to sleep", or if there were some sort of tool for saving a "workspace" of tabs that you could reactivate the next time you work on that project. What sort of solution can you recommend to solve this problem?

    Read the article

  • dovecot login issue with plain passwords

    - by user3028
    I am having an odd problem in dovecot: the first time I try to log in via telnet, dovecot gives an error; the second time it works, both within the same telnet session. This is the telnet session; note the 'BAD Error in IMAP command received by server' and the "a OK" just after it:

        telnet 192.168.1.2 143
        * OK Waiting for authentication process to respond..
        * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot ready.
        a login someUserLogin supersecretpassword
        * BAD Error in IMAP command received by server.
        a login someUserLogin supersecretpassword
        a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS] Logged in

    dovecot configuration:

        dovecot -n
        # 2.0.19: /etc/dovecot/dovecot.conf
        # OS: Linux 3.5.0-34-generic x86_64 Ubuntu 12.04.2 LTS
        auth_debug = yes
        auth_verbose = yes
        disable_plaintext_auth = no
        login_trusted_networks = 192.168.1.0/16
        mail_location = maildir:~/Maildir
        passdb {
          driver = pam
        }
        protocols = " imap"
        ssl_cert = </etc/ssl/certs/dovecot.pem
        ssl_key = </etc/ssl/private/dovecot.pem
        userdb {
          driver = passwd
        }

    This is the log file:

        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: auth client connected (pid=23499)
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client in: AUTH#0111#011PLAIN#011service=imap#011secured#011no-penalty#011lip=192.168.1.2#011rip=192.169.1.3#011lport=143#011rport=50438#011resp=<hidden>
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): lookup service=dovecot
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): #1/1 style=1 msg=Password:
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client out: OK#0111#011user=someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master in: REQUEST#0111823473665#01123499#0111#0113a58da53e091957d3cd306ac4114f0b9
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: passwd(someUserLogin,192.169.1.3): lookup
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master out: USER#0111823473665#011someUserLogin#011system_groups_user=someUserLogin#011uid=1000#011gid=1000#011home=/home/someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: imap-login: Login: user=<someUserLogin>, method=PLAIN, rip=192.169.1.3, lip=192.168.1.2, mpid=23503, secured
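
    One way to tell whether the first BAD comes from dovecot itself or from something telnet sends (stray characters or line endings before the banner has finished) is to drive the same login from a script. A minimal sketch using Python's standard imaplib, reusing the host and credentials from the session above:

        # Reproduction sketch: host and credentials are the placeholders
        # from the telnet session above.
        import imaplib

        imaplib.Debug = 4  # dump the raw client/server exchange to stderr

        conn = imaplib.IMAP4("192.168.1.2", 143)  # plain IMAP on port 143
        print(conn.login("someUserLogin", "supersecretpassword"))  # raises on failure
        conn.logout()

    If the scripted login succeeds on the first try, the problem is in the telnet interaction rather than in the passdb configuration.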

    Read the article

  • gitolite mac don't add new user to authorized_keys

    - by crashbus
    I installed gitolite and everything works fine for me as admin. But when I add a new user, the new user can't connect to the server. After I looked into the file authorized_keys I saw that the new user wasn't added to it. During the commit of the new public key I get some warnings:

        WARNING: split conf not set, gl-conf present for 'gitolite-admin'
        Counting objects: 6, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (4/4), done.
        Writing objects: 100% (4/4), 882 bytes, done.
        Total 4 (delta 1), reused 0 (delta 0)
        remote: WARNING: split conf not set, gl-conf present for 'gitolite-admin'
        remote: WARNING: ?? @staff christianwaldmann markwelch
        remote: sh: find: command not found
        remote: sh: find: command not found
        remote: sh: sort: command not found
        remote: sh: find: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: cut: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 23: grep: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sort: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sed: command not found
        remote: sh: find: command not found
        remote: sh: find: command not found

    How can I fix this so that gitolite automatically adds new users to authorized_keys? (Judging by the 'command not found' lines, the trigger scripts seem to be running without /usr/bin and /bin on their PATH.)

    Read the article

  • Handling the Outlook 2007 AutoArchive PST file

    - by Doug Luxem
    We encourage our users to enable AutoArchive in Outlook 2007 as a way to manage their mailbox sizes. However, we frequently run into problems with the archive.pst file that is generated. The two main problems we have are:

    1. The archive.pst file is located in the user's local profile directory and is never backed up. A dead hard drive or stolen laptop could result in months or years of missing email. All other personal data is stored on network shares, but we can't do that for Outlook PST files.
    2. Without some sort of manual intervention, the archive will grow to an enormous size. Although Outlook 2007 SP2 handles large files better than before, it still results in slow response times from Outlook and an increased likelihood of a corrupt PST file.

    To mitigate these problems personally, I move the archives to a C:\Outlook folder and manually back that up to a shared drive every month or so. Additionally, I rotate archive files every year so that I have one file for each year (archive2008.pst, etc.). Obviously, asking our users to do the same wouldn't help much. We need some sort of automated solution to take care of points 1 and 2. I have to imagine this is a common problem for Exchange organizations, so what is the best method to handle this?
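
    For point 1, the manual copy-to-share step is straightforward to script. A minimal sketch of the idea in Python, where the paths and share name are assumptions and the archiveYYYY.pst naming follows the yearly rotation described above; it could run as a scheduled task per user:

        # Hypothetical sketch: C:\Outlook is the local folder mentioned above,
        # and the share name is made up for the example.
        import shutil
        from pathlib import Path

        SOURCE = Path(r"C:\Outlook")
        DEST = Path(r"\\fileserver\pst-backup")

        def backup_archives(user: str) -> None:
            target = DEST / user
            target.mkdir(parents=True, exist_ok=True)
            for pst in SOURCE.glob("archive*.pst"):
                # copy2 preserves timestamps; the copy fails while Outlook
                # holds the PST open, so schedule it at logoff or overnight
                shutil.copy2(pst, target / pst.name)

        backup_archives("jdoe")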

    Read the article

  • What is causing sudden freezing during running real-time program?

    - by Trevor Boyd Smith
    So I run a highly intensive (CPU/GPU) real-time program. During normal execution, suddenly everything freezes for 1-4 seconds. I opened "Process Explorer" in the background to help gain insight and maybe identify something. Here is what the CPU/GPU graphs look like when I align them in time: notice the 4 distinct drops in both the CPU and GPU. You can see that usage goes from some sort of positive CPU/GPU level to almost zero. These drops in the graph align with when the real-time program suddenly freezes. How do I find what is causing these sudden drops?

    NOTE: When you put your mouse over the graph it tells you the time, accurate to the second, for where your cursor is. Maybe this mouse-over feature could be helpful in some way (e.g. what if you had a log of all processes every 100ms).

    EDIT: The real-time program is a video game, so I can't watch some sort of instrumentation while the game is running. I need a solution that lets you look back in time somehow to see what was happening when the slowdown occurred.

    EDIT: Re recording data vs. using a real-time monitor: Windows Performance Recorder is for some reason not recording what I expect it to record, so I switched to using "perfmon" and then opening its "resource monitor". Re setting it up so I can view the real-time monitor: in the video game I set it to spectate and then put the game in "windowed" mode so that I can view the real-time display that Resource Monitor has. Now that I can get semi-real-time data (only once per second... how do you get more than once per second?) I started looking at the various real-time readouts. Getting to the cause: I noticed a strong correlation between high disk IO and low CPU usage (which is also when the in-game freezing happens). How do you use Resource Monitor to find out who is doing all this offending disk IO?
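
    Resource Monitor tops out at roughly one refresh per second and keeps no history, which is exactly the limitation here. Below is a sketch of a logger that samples per-process disk counters faster than that and records only the busy processes; it assumes the third-party psutil package (pip install psutil):

        # Per-process disk I/O logger: run it during play, read the log
        # after a freeze to see who was hitting the disk at that moment.
        import time
        import psutil

        prev = {}
        while True:
            stamp = time.strftime("%H:%M:%S")
            for proc in psutil.process_iter(["name"]):
                try:
                    io = proc.io_counters()
                except (psutil.AccessDenied, psutil.NoSuchProcess):
                    continue
                last = prev.get(proc.pid)
                if last:
                    delta = (io.read_bytes - last.read_bytes) \
                          + (io.write_bytes - last.write_bytes)
                    if delta > 10 * 1024 * 1024:  # only processes moving >10 MB
                        print(f"{stamp} {proc.info['name']} (pid {proc.pid}):"
                              f" {delta / 2**20:.1f} MB")
                prev[proc.pid] = io
            time.sleep(0.5)  # faster than Resource Monitor's 1 s refresh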

    Read the article

  • Tools for displaying a multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.] There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of in a cube instead.) Is there a way to display this "cube" interactively, ideally on a webpage? Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants. The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't need sophisticated graphics necessarily, just the ability to select from cross-sections of variables. But I have no experience with (say, for displaying on a webpage) what web gadgets exist, so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.) [I can imagine how it ought to work for higher dimensions. For four-dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.] So has this been written? Is this easy to write? Where ought one to look for such a thing?
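
    Independent of which web gadget ends up displaying it, the interaction described (fix two variables, page through cross-sections of the third) is only a few lines in any array library. A toy numpy sketch of the data model, just to make the "stack of tables" idea concrete:

        # A minimal model of the "stack of cross-sections" idea.
        import numpy as np

        cube = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # toy (i, j, k) data

        # The user picks axes i and j; the "stack" is one 2-D table
        # per value of the third variable k.
        for k in range(cube.shape[2]):
            print(f"--- cross-section k={k} ---")
            print(cube[:, :, k])  # a 2x3 table

        # Looking up a single (i, j, k) value:
        print(cube[1, 2, 3])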

    Read the article

  • Checking partition alignment with PowerCLI

    - by Julian
    I'm trying to verify that the file-system partitions within each of the servers I'm working on are aligned correctly. I've got the following script; when I run it, it will claim either that all virtual servers are aligned or that none are, depending on which if statement I use (one is commented out):

        $myArr = @()
        $vms = get-vm | where {$_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Microsoft Windows*" } | sort name
        foreach ($vm in $vms) {
            $wmi = get-wmiobject -class "win32_DiskPartition" -namespace "root\CIMV2" -ComputerName $vm
            foreach ($partition in $wmi) {
                $Details = "" | Select-Object VMName, Partition, Status
                #if (($partition.startingoffset % 65536) -isnot [decimal]) {
                if ($partition.startingoffSet -eq "65536") {
                    $Details.VMName = $partition.SystemName
                    $Details.Partition = $partition.Name
                    $Details.Status = "Partition aligned"
                }
                else {
                    $Details.VMName = $partition.SystemName
                    $Details.Partition = $partition.Name
                    $Details.Status = "Partition not aligned"
                }
                $myArr += $Details
            }
        }
        $myArr | Export-CSV -NoTypeInformation "C:\users\myself\Documents\Scripts\PartitionAlignment.csv"

    Would anyone know what is wrong with my code? I'm still learning about partitions, so I'm not sure how I need to check the starting offset number to verify alignment.

    EDIT:

        $myArr = @()
        $vms = get-vm | where {$_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Microsoft Windows*" } | sort name
        $wmi = get-wmiobject -class "win32_DiskPartition" -namespace "root\CIMV2" -ComputerName $vm
        #foreach ($_ In Get-WMIObject Win32_DiskPartition | Select Name, BlockSize, NumberOfBlocks, StartingOffSet, @{n='Alignment'; e={$_.StartingOffSet/$_.BlockSize}}) {$_}
        foreach ($wmi | Select Name, BlockSize, NumberOfBlocks, StartingOffSet, @{n='Alignment'; e={$_.StartingOffSet/$_.BlockSize}}) {$_}
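
    Whatever the PowerShell ends up looking like, the underlying test is plain modulo arithmetic: a partition is aligned when StartingOffset is an exact multiple of the alignment unit (65536 in the script above). A language-neutral sketch of that check in Python:

        # The alignment test itself; 65536 is taken from the script above,
        # and whether that is the right unit for your storage is a separate
        # question (4096 is another common value).
        def is_aligned(starting_offset: int, unit: int = 65536) -> bool:
            return starting_offset % unit == 0

        print(is_aligned(65536))    # True  - exactly one unit in
        print(is_aligned(1048576))  # True  - 16 units in, still aligned
        print(is_aligned(32256))    # False - the classic 63-sector offset

    Note that comparing the offset to the single value 65536 only matches the first partition on a disk; later aligned partitions have offsets that are larger multiples of the unit, which is why a modulo test is needed rather than an equality test.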

    Read the article

  • How Do I Stop NFS Clients from Using All of the NFS Server's Resources?

    - by Ken S.
    I have a v4 NFS server running on Ubuntu 12.04 LTS. It is the main repository for the web assets that four external nginx webservers mount to serve up to site visitors. These client servers connect to it via a read-only mount. Each of these RO servers shows this when I check the mounts:

        10.0.0.90:/assets on /var/www/assets type nfs4 (ro,addr=10.0.0.90,clientaddr=0.0.0.0)

    The NFS master's /etc/exports file contains entries like this for each server:

        /mnt/lvm-ext4 10.0.0.40(ro,fsid=0,insecure,no_subtree_check,async)

    The problem I'm seeing is that these clients eventually use up all the RAM on the NFS server and cause it to crash. If I do a watch free -m I can watch the used memory creep up until it's exhausted, and see the free buffers/cache entry creep down to near zero before the server eventually locks up, requiring a reboot. There is some sort of memory leak somewhere that is causing this, and the optimal solution would be to find it and fix it, but in the meantime I need a way for the NFS server to protect itself from connected clients using all its RAM. There must be some sort of setting that limits the resources the clients can use, but I can't seem to find it. I've tried adjusting the values for rsize and wsize, but they don't seem to help or be related. Thanks for any tips.

    Read the article

  • 802.11g -> wired ethernet bridging not working

    - by Malachi
    Usually people want to go the other direction, but I want to take our relatively fast and stable household 802.11g signal and bridge it to Ethernet. I have tried using an Airport Express (the b/g flavor) and my i7 MacBook Pro, both to no avail. Word is that this flavor of Airport Express maxes out at firmware 6.3, which doesn't support this kind of bridging properly. However, I expected my MacBook Pro to do the job with its "Internet Sharing" feature. Alas, although my wired PC does sort of see it, it doesn't work out. Strangely, using DHCP the PC receives the same IP address as my MBP uses on the network. Less strangely, but still surprisingly, the wired Ethernet port on my Mac registers as the IP address of the gateway when queried with IFCONFIG. It sort of makes sense that the Mac would "pretend" to be the gateway, but the whole thing just isn't working and seems configured wrong; yet all the docs I see say basically "OS X Internet Sharing: click it and go". What do I do? Do I really have to buy more hardware, even though I have plenty of would-be candidates for bridging? Incidentally, the host router originating the 802.11g signal is a Belkin 802.11g router, and is documented to support WDS.

    Read the article

  • Photo managing software that supports network drives?

    - by musicfreak
    My dad is a photographer in his free time, and he's been using Lightroom to manage his photos. However, recently, we put all of our photos on a NAS drive to allow us to access them from any computer at any time. The problem with this is that Lightroom cannot load catalogs from network drives. We need support for network drives because we'd like to be able to browse the photos from any computer, and for any computer to be able to add photos to the collection. Right now we're just syncing the Lightroom catalog file between us, but the extra step is a pain, and doing it manually makes it error-prone. Is there any software (free or commercial) that has proper support for network drives? The only real feature I need is to be able to sort photos by date and by some sort of tags. I don't need any editing features like those found in Lightroom; my dad is comfortable using Photoshop to edit photos. Also, if there is another solution to this that I haven't thought of, feel free to share.

    Read the article

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by Trevor Tippins
    Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file...but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling but this only serves to hang the terminal session too. I've also tried to run the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer) but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix? Remote OS: AIX 5 Client OS: Windows 7 Pro (32-bit) Printer: Generic/Text on a FILE port TE Software: MultiView 2000 (32-bit) Thanks in advance.

    Read the article

  • Give Access to a Subdirectory Without Giving Access to Parent Directories

    - by allquixotic
    I have a scenario involving a Windows file server where the "owner" wants to dole out permissions to a group of users of the following sort:

        \\server\dir1\dir2\dir3: Read & Execute and Write
        \\server\dir1\dir2: No permissions.
        \\server\dir1: No permissions.
        \\server: Read & Execute

    To my understanding, it is not possible to do this because Read & Execute permission must be granted on all the parent directories in a directory chain in order for the operating system to be able to "see" the child directories and get to them. Without this permission, you can't even obtain the security context token when trying to access the nested directory, even if you have full access to the subdirectory. We are looking for ways to get around this, without moving the data from \\server\dir1\dir2\dir3 to \\server\dir4. One workaround I thought of, but which I am not sure will work, is creating some sort of link or junction \\server\dir4 which is a reference to \\server\dir1\dir2\dir3. I am not sure which of the available options (if any) would work for this purpose if the user does not have Read & Execute permission on \\server\dir1\dir2 or \\server\dir1, but as far as I know, the options are these: NTFS Symbolic Link, Junction, Hard Link. So the questions: Are any of these methods suitable to accomplish my goal? Are there any other methods of linking or indirectly referencing a directory, which I haven't listed above, which might be suitable? Are there any direct solutions that don't involve granting Read & Execute to \\server\dir1 or \\server\dir2 but still allow access to \\server\dir1\dir2\dir3?

    Read the article

  • Performance experiences for running Windows 7 on a Thin-Client?

    - by Peter Bernier
    Has anyone else tried installing Windows 7 on thin-client hardware? I'd be very interested to hear about other people's experiences and what sort of hardware tweaks they had to do to get it to work. (Yes, I realize this is completely unsupported; half the fun of playing with machines and beta/RC versions is trying out unsupported scenarios. :) ) I managed to get Windows 7 installed on a modified Wyse 9450 thin client, and while the performance isn't great, it is usable, particularly as an RDP workstation. Before installing 7, I added another 256 MB of RAM (512 MB total), a 60 GB laptop hard drive, and a PCI video card to the 9450 (the latter in order to increase the supported screen resolution). I basically did this to see whether or not it was possible to get 7 installed on such minimal hardware, and to see what the performance would be. For a 550 MHz processor, I was reasonably impressed. I've been using the machine for RDP for the last couple of days and it actually seems slightly snappier than the default Windows XP Embedded install (although this is more likely the result of the extra hardware). I'll be running some more tests later on, as I'm curious to see particularly whether the streaming video performance will improve. I'd love to hear about anyone's experiences getting 7 to work on extremely low-powered hardware, particularly any sort of tweaks you've discovered to increase performance.

    Read the article

  • From a performance standpoint, is there a preferred way to set up Thunderbird message filters?

    - by Steve V.
    When I used Kontact, I used filters heavily to sort my incoming email into various folders. Since switching to Thunderbird, I've been slowly recreating these filters as new mail arrives. This seems like the perfect time to rethink how I use message filters. The way I see it, there are two basic ways to filter messages: either I can have lots of filters, or I can have lots of criteria. An example is in order. Assume that I get emails from four people (Boss, CEO, Intern, and StackOverflow) that I want to sort into two folders, "Stack" and "Work".

        Option A:
        Filter 1: if FROM contains "Boss" -> move to "Work"
        Filter 2: if FROM contains "CEO" -> move to "Work"
        Filter 3: if FROM contains "Intern" -> move to "Work"
        Filter 4: if FROM contains "StackOverflow" -> move to "Stack"

        Option B:
        Filter 1: if FROM contains "Boss" OR FROM contains "CEO" OR FROM contains "Intern" -> move to "Work"
        Filter 2: if FROM contains "StackOverflow" -> move to "Stack"

    Assuming that when I'm done I'll have about a hundred different criteria to filter on, is one of these methods better than the other from a performance standpoint?
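
    Thunderbird's filter engine aside, both layouts evaluate the same criteria in the worst case; the difference is mostly per-filter bookkeeping. A toy model of the two options (this is not Thunderbird code, just an illustration of the comparison):

        senders_to_work = ("Boss", "CEO", "Intern")

        def option_a(sender):  # many filters, one criterion each
            for name in senders_to_work:          # Filters 1-3
                if name in sender:
                    return "Work"
            if "StackOverflow" in sender:         # Filter 4
                return "Stack"
            return None

        def option_b(sender):  # few filters, OR-ed criteria
            if any(name in sender for name in senders_to_work):  # Filter 1
                return "Work"
            if "StackOverflow" in sender:                        # Filter 2
                return "Stack"
            return None

        # Worst case, both run four substring checks per message, so with
        # ~100 criteria total the two layouts do essentially the same work.
        print(option_a("Boss <boss@corp.example>"))                   # Work
        print(option_b("StackOverflow <noreply@stackoverflow.com>"))  # Stack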

    Read the article

  • Steps to debugging web site latency and timeout issues

    - by Paperjam
    I have a client who has multiple offices around the country, all of which share the same Internet connection via their WAN. One specific office for this client is experiencing severe latency and timeout issues with my web site. Most, but not all, of the latency occurs on a specific ASPX page where multiple postbacks are made while populating cascading dropdown lists (rapid form submits). The latency is sporadic and can be anywhere from a few seconds to a full timeout. There is no indication that the timeouts are occurring on the server's end. The IT guy for this client is having trouble narrowing down the problem. Since it is affecting only one location for one client, I am led to believe it is not something with my site but something specific to that location. He's measured ping times while using the site and has noticed no real variance in ping times even when the page has timed out. I believe this may be being caused by some sort of Internet filter that doesn't like rapid form submits, but beyond a hunch I haven't a clue. My question is what things should I tell the IT guy to look for? While I'm not trying to provide active tech support for this issue, I would at least like to glean an understanding of what is going on and try to offer some sort of advice. Thank you.
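
    One concrete thing to hand the IT guy: a probe run from a machine at the affected office that repeats the kind of rapid form submit the ASPX page performs and logs every timing, so timeouts can be matched against times of day and against the other offices. A hedged sketch assuming the third-party requests package; the URL and form fields are placeholders, and a real ASPX postback would also need the page's viewstate fields:

        # Hypothetical latency probe: URL and form data are placeholders.
        import time
        import requests

        URL = "https://example.com/cascading.aspx"  # the problem page

        while True:
            start = time.time()
            try:
                r = requests.post(URL, data={"field": "value"}, timeout=30)
                status = r.status_code
            except requests.exceptions.RequestException as exc:
                status = type(exc).__name__  # Timeout, ConnectionError, ...
            print(f"{time.strftime('%H:%M:%S')} {status}"
                  f" {time.time() - start:6.2f}s")
            time.sleep(1)  # rapid, repeated submits, like the dropdowns

    If a filtering device dislikes rapid submits, the log should show failures clustering after bursts, while the same probe from another office stays clean.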

    Read the article

  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes", but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically have the infrastructure already in place to support these services at several of them. To provide access to these services, I would create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP of one of my DCs), and clients connect via that DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry at some sort of third party who offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our own servers, but that would kinda defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.

    Read the article

  • Data recovery; nearly 1 tb of movies on a WD 3.5 tb personal cloud drive disappears with scanty traces

    - by Effector Dhanushanth
    I have a great collection of movies that I had stored in a logical mesh of folders on my 3.5 TB WD Personal Cloud drive. I woke up one morning and found that everything was fine with my data on this drive except for my movie collection. There were two main folders, one named "2sort" and the other "segregated". Of all the segregated subfolders, only the letters C and D and two or three others remain, and the "2sort" folder, which had umpteen subfolders amounting to more than 0.5 TB, is just gone! This is a great downfall. Now, this is a personal cloud drive with no USB port or the like, so unfortunately I can't hardwire it and recover files directly. I'm sure there is software out there that can help me recover my beloved movies from such an interestingly hard-to-reach device? What might that software be, compadre; my happiness lies within your answer. Thank you. Remember: recovery software for a (WD) personal cloud. :) These movies were all hand-picked over the course of ten years. I just never catalogued my collection, so if I could just get the list of my lost collection, that'd be enough; recovering them would be a bonus, though they could well be damaged if I were to somehow recover them, you know? Still, I'm certain they're all intact; I guess the file index just got corrupted. There surely is a veil of some sort that needs to be thrown or pushed aside to reveal my movies. What software can do that? Thanks immensely!

    Read the article

  • Mac always boots with incorrect display gamma (for years now including Lion)

    - by Alex Wayne
    I think somewhere, something got installed, but I have no idea what or how to fix it. :( Basically, my old MacBook Pro running 10.5 Leopard had a problem where, on boot, it would show everything on the screen in a sort of crunched color space: everything below 15% white would be pure black, everything above 85% white would be pure white, and all colors looked a touch more saturated. It's garish. To fix it, I found that I could boot into almost any fullscreen 3D game. When the game launches the colors are still off, but when I then quit the game and return to the desktop, everything is normal again. I've noticed Blizzard games work most reliably for this (World of Warcraft or Starcraft 2). This problem has followed me through the years. When I upgraded to an iMac I migrated everything over to it, and the issue now happens on the iMac too. I then got a new MacBook Pro for work and migrated the iMac over to that, and it has the problem too. I had thought that it was an OS bug, but upgrading to 10.6 Snow Leopard didn't fix it and neither did 10.7 Lion. Furthermore, I can't find any reference on any forum or help site where anyone else has this problem. If anyone has any idea what processes or settings or apps I should look at to figure out why this is happening, I sure would appreciate it! It looks sort of irresponsible when I open my laptop in the office to work and then boot up Starcraft 2 full screen...

    Read the article

  • How should an experienced Windows SysAdmin learn Linux? [closed]

    - by Systemspoet
    I have a new hire starting in a few weeks who is an experienced Windows SysAdmin. I think he's fairly senior on the Windows side, with a pretty deep AD understanding and experience with Exchange 2007, 2010, and Exchange migrations. He's done a little PowerShell, but I suspect it's more of the "run this command to do this" variety than the "write a script to do this" sort. However, we are a mixed shop and (he knows this) I expect him to become a reasonably competent Linux SysAdmin over time. I'm looking for good starting points to bring him along. I have over ten years of Linux/UNIX experience, so it all seems intuitive to me, but I've been thinking about the toolkit you actually need to be productive in the Linux CLI world. Just to be able to use the machines at all, off the top of my head:

        vi
        Basic CLI stuff -- move around, rename files, copy files, tar, gzip, changing passwords, finding relevant manpages, keeping track of where you are, finding things in your history, etc., etc.
        More advanced things that I take for granted but are actually pretty hard -- doing things with 'find', extracting relevant text via 'awk' and/or 'cut', knowing when to use 'grep' and when to use 'grep -e' or 'egrep'.
        Distribution-specific stuff... compiling software, rpm, yum, apt-get, you name it.

    This all seems pretty basic to me, but when I think back to 1995 when I was first learning my way, some of those things took me years to master. So my question is: where should I send him to pick up those skills? I'm not just thinking of classes, but also websites and books. Where do you all suggest as a starting point for picking up Linux skills?

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold:

    1. Reduce storage utilization
    2. Reduce bandwidth needed to sync the library with cloud storage

    Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports a dedup ratio of 1.00. If I copy all of the files (making exact duplicates of the three), the ratio climbs, so I know that dedup is enabled and functioning; it's just not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedupe for this project; but perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedupe performance for this sort of dataset, or confirmation that ZFS dedupe is not the right tool for this job, is appreciated.
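
    The misalignment hypothesis can be tested outside ZFS. Dedup operates on whole records (128 KiB is the default recordsize), so two FLACs only dedup where identically-sized leading data keeps their record boundaries in phase. A sketch that hashes two files in fixed-size blocks and counts identical blocks:

        # Rough estimate of block-level duplication between two files.
        # 128 KiB mirrors the default ZFS recordsize.
        import hashlib
        import sys

        def block_hashes(path, blocksize=128 * 1024):
            with open(path, "rb") as f:
                return {hashlib.sha256(chunk).hexdigest()
                        for chunk in iter(lambda: f.read(blocksize), b"")}

        a = block_hashes(sys.argv[1])
        b = block_hashes(sys.argv[2])
        print(f"{len(a & b)} of {len(a)} / {len(b)} blocks identical")

        # A header that differs in length by even one byte shifts every
        # later block boundary, so identical audio frames hash differently;
        # that would explain a 1.00 dedup ratio on otherwise similar FLACs.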

    Read the article

  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, to begin (among other things) billing clients based on usage. The problem we're running into is that there are a few circumstances where our support team will simulate a login into production to try to reproduce reported issues with a client's configuration. When they log in, an entry is made in our analytics logs that their specific account has logged in, which we use to calculate billing. A few ideas we had to solve this:

    1. We log IP addresses as well as machine keys for each PC that logs in; we could filter out known IP addresses and/or machine keys belonging to support. The drawback is we have to maintain a list of keys/addresses manually.
    2. If support (or anyone else internal) runs our application in debug mode (as opposed to release), it will not report analytics. This is fine, as long as support (or anyone else) remembers to switch to debug mode.
    3. Include some sort of reg key or similar setting required to be set when configuring a production system in order to send analytics. Again, fine, as long as our infrastructure team remembers to set the reg key or setting.

    All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS Of course, we shouldn't be testing in production, but there are a few one-off instances with customer setup that we can't reproduce without logging in as them in production. This is the only time we do so, and this is the case I'm talking about in this question.)
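
    Idea 1 can at least be centralized so the manually-maintained list lives in one place and every billing query shares it. A sketch of that shape in Python, where the field names and example values are entirely hypothetical:

        # Sketch of idea 1: filter support logins out at billing time.
        # The exclusion sets would be maintained in one place (a file or
        # a DB table); the values here are made up.
        SUPPORT_IPS = {"203.0.113.10", "203.0.113.11"}
        SUPPORT_MACHINE_KEYS = {"a1b2c3d4", "e5f6a7b8"}

        def billable(event: dict) -> bool:
            return (event.get("ip") not in SUPPORT_IPS
                    and event.get("machine_key") not in SUPPORT_MACHINE_KEYS)

        events = [
            {"account": "client42", "ip": "198.51.100.7", "machine_key": "ffff0000"},
            {"account": "client42", "ip": "203.0.113.10", "machine_key": "a1b2c3d4"},
        ]
        print([e for e in events if billable(e)])  # the support login drops out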

    Read the article

  • Urgent: how to deny read access to a ExecCGI directory

    - by Malvolio
    First, I can't believe that that isn't the default behavior. Second, yikes! I don't know how long my code had been hanging out there, with all sorts of cool secret stuff, just waiting for some hacker who knows Apache better than I do.

    EDIT (and apology): Well, this is sort of embarrassing. Here's what happened: we had some Python scripts available to the web at /aux/file.py, which were, not surprisingly, at /var/www/http/aux. Separately, we were running an app server, and Apache proxies through at /servlets/. A contractor had constructed the WAR file by bundling up all the generated files, including the Python files (which are in a directory also called aux, not surprisingly), so if you typed in /servlets/aux/file.py, the web server would ask the app server for it and the app server would just supply the file. It was the latter URL that I happened to type in by accident this morning, and lo, the source appeared. Until I realized the sheer unlikelihood of what I had done, the situation was rating about 8.3 on the sphincter scale. After a tense half-hour or so I realized that it had nothing to do with the CGI (and that serving files that were also executable would be not only foolish but also impossible), and I was able to address the real problems. So: sorry, everybody. Let the scorn-fest commence.

    Read the article

  • where is my disk space?

    - by user166241
    I recently had a problem with the .xsession-errors file: it became very big (90 GB) and took all the disk space (see "How I can check what takes disk space in /tmp?"). I cleaned it with the command > .xsession-errors, but after an hour it became large again. So I deleted it (rm .xsession-errors); that helped in that it wasn't recreated, but again after an hour the disk space disappeared. Now there is no .xsession-errors anymore, and I don't know where the space went:

        df
        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda1      106640456 101223392         4 100% /
        udev             8166744         8   8166736   1% /dev
        tmpfs            3270224       972   3269252   1% /run
        none                5120         0      5120   0% /run/lock
        none             8175552       152   8175400   1% /run/shm

        du -sc * .[^.]* | sort -n
        0         initrd.img
        0         initrd.img.old
        0         proc
        0         sys
        0         vmlinuz
        0         vmlinuz.old
        4         cdrom
        4         lib64
        4         media
        4         mnt
        4         selinux
        8         dev
        12        srv
        16        lost+found
        68        tmp
        1124      run
        3396      lib32
        5164      .rpmdb
        5540      root
        8888      sbin
        9120      bin
        17132     etc
        106080    opt
        116956    boot
        861908    lib
        3530584   usr
        3821836   var
        13371260  home
        21859112  total

    So around 100 GB is used, but running du -sc * .[^.]* | sort -n in the root directory finds only ~21 GB. What is taking the other 80 GB, and how do I check? I suspect that when I deleted the .xsession-errors file the errors were redirected somewhere else, but where?
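
    A classic cause of df and du disagreeing like this: some process still holds the deleted file open, so the kernel cannot free the blocks until that process closes it or exits, and deleting .xsession-errors while the X session is still writing to it produces exactly that. The quick check is lsof | grep deleted; below is a sketch of the same idea reading /proc directly (run as root so every process is visible):

        # Find deleted-but-still-open files, the usual suspect when df
        # and du disagree. Linux-only; relies on the "(deleted)" suffix
        # the kernel appends to /proc/<pid>/fd symlink targets.
        import os

        for pid in filter(str.isdigit, os.listdir("/proc")):
            fd_dir = f"/proc/{pid}/fd"
            try:
                fds = os.listdir(fd_dir)
            except (PermissionError, FileNotFoundError):
                continue
            for fd in fds:
                try:
                    target = os.readlink(f"{fd_dir}/{fd}")
                    if target.endswith("(deleted)"):
                        size = os.stat(f"{fd_dir}/{fd}").st_size
                        print(f"pid {pid}: {target}"
                              f" ({size / 2**30:.2f} GiB still held)")
                except OSError:
                    continue

    If this turns up the missing 80 GB, restarting the offending process (or logging the X session out and back in) releases the space.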

    Read the article

  • Vim: How to join multiples lines based on a pattern?

    - by ryz
    I want to join multiple lines in a file based on a pattern that both lines share. This is my example:

        {101}{}{Apples}
        {102}{}{Eggs}
        {103}{}{Beans}
        {104}...
        ...
        {1101}{}{This is a fruit.}
        {1102}{}{These things are oval.}
        {1103}{}{You have to roast them.}
        {1104}...
        ...

    I want to join the lines {101}{}{Apples} and {1101}{}{This is a fruit.} into one line:

        {101}{}{Apples}{1101}{}{This is a fruit.}

    for further processing. The same goes for the other lines. As you can see, both lines share the number 101, but I have no idea how to pull this off. Any ideas?

    EDIT: I found a "workaround": first, delete all preceding "{1" characters from group two in VISUAL BLOCK mode with C-V (or a similar shortcut), then sort all lines by number with :%sort n, then join every second line with :let @q = "Jj" followed by 500@q. This works, but leaves me with {101}{}{Apples} 101}{}{This is a fruit.}. I would then need to add the missing "{1" characters back in each line, which is not quite what I want. Any help appreciated.
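
    If stepping outside vim is acceptable, the pairing rule visible in the sample (1101 = 1000 + 101, and so on) makes this a few lines of Python; a sketch that keeps the "{1" prefix intact:

        # Pair {101}... with {1101}... by numeric key and join each pair,
        # assuming the "second id = first id + 1000" rule from the sample.
        import re

        lines = [
            "{101}{}{Apples}",
            "{102}{}{Eggs}",
            "{103}{}{Beans}",
            "{1101}{}{This is a fruit.}",
            "{1102}{}{These things are oval.}",
            "{1103}{}{You have to roast them.}",
        ]

        by_id = {}
        for line in lines:
            num = int(re.match(r"\{(\d+)\}", line).group(1))
            by_id[num] = line

        for num in sorted(n for n in by_id if n < 1000):
            print(by_id[num] + by_id.get(num + 1000, ""))
        # -> {101}{}{Apples}{1101}{}{This is a fruit.}  and so on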

    Read the article
