Search Results

Search found 11147 results on 446 pages for 'background foreground'.


  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that'd make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information is not simply there. That said, I'm wondering if heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types. For photorealistic pictures, for cartoons with large flat areas, for pixel art... One algorithm I'm aware of is Seam Carving. It works for some kinds of pics, especially ones with a plain, undetailed or uninteresting background, and a subject that strongly stands out. But it's far from being general-purpose. Applying it to the above pic produces this. It looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic. Another is Pixel art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the scale2x windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling because the algorithm didn't detect any isolated pixely fragments to work from. Something else I tried was: I enlarged the image in Photoshop with bicubic interpolation, then I applied unsharp mask. The result looks pretty bad. The red blotch is actually resized reasonably well, but the dove is far from it. What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters... the result doesn't need to be completely faithful to the original small image.
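
    Not state of the art, but a useful baseline to compare dedicated upscalers against (a sketch only, with hypothetical filenames): ImageMagick with a Lanczos filter plus a mild unsharp mask usually looks a little less blurry than plain bicubic.

        # Sketch: Lanczos resize + mild sharpening with ImageMagick (filenames are placeholders)
        convert birds_small.jpg -filter Lanczos -resize 400% -unsharp 0x1+0.7+0.02 birds_4x.jpg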

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001.

    So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know:

      - Why the massive start/stop count? Do I have some sort of configuration issue?
      - Could there be a background service that is causing trouble?
      - Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this?

    Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    and the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
          4 Start_Stop_Count        0x0032   001   001   000    Old_age   Always       -       185875
          9 Power_On_Hours          0x0032   090   090   000    Old_age   Always       -       7820
         12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       109
        193 Load_Cycle_Count        0x0032   118   118   000    Old_age   Always       -       246833
        194 Temperature_Celsius     0x0022   107   098   000    Old_age   Always       -       36

    As I generally prefer my drives to last more than a year, any advice is appreciated.
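
    The raw Load_Cycle_Count (246,833) together with apm = 127 suggests the drive is aggressively spinning down and unloading its heads; a sketch of how one might disable that, assuming the OS drive really is /dev/sda:

        # Sketch only: check the counters, then raise the APM level so the drive
        # stops spinning down / parking heads every few minutes.
        sudo smartctl -A /dev/sda | egrep 'Start_Stop|Load_Cycle'
        sudo hdparm -B 254 /dev/sda     # 254 = max performance without spin-down (255 disables APM entirely)

        # To make it persistent, in /etc/hdparm.conf:
        #   /dev/sda {
        #       apm = 254
        #       # spindown_time removed or commented out
        #   }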

    Read the article

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration I am currently handling larger backup/resize tasks, with partitions of around 900 GB which are 70-90% full.

    Some background: the first thing I noticed was that the Acronis/Western Digital TrueImage was extremely slow while running under Windows 7, even on high priority. To create a normal backup for 650 GB of data (900 GB partition), it would have taken 3 days! The same task done with the boot-CD version of this Acronis version took about 2 hours (SATA3 copy from one disk to another, both around 110 MB/s).

    Now, after I have done all my backups, I wanted to remove some obsolete partitions and resize the leftovers to the full HDD size. Of course, this usually takes quite some time - in this case, extending this 900 GB partition to 931 GB (30 GB+ from the front, 1 GB+ from the end) will take around 6 hours (using GParted)! Had I known that earlier, I would have just restored the image. But no - first it showed a reasonable time of 1:45h and 0 of 1 operations, but after finishing the 1:45h it started again, only this time with 4h to go, still 0 of 1 operations, but now it was copying instead of moving.

    Question: why does it have to be this slow to resize a partition? I am asking for a good explanation. This has bugged me since I started partitioning - why does it need to copy all the data around, can't it just stay in place?!
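
    Part of the answer: growing a partition only at its end does not move any data, because just the partition table entry and the filesystem metadata change; it is adding space in front of the partition that forces tools like GParted to shift (copy) every block, which is what takes hours. A sketch, assuming an ext4 partition /dev/sdb1 with free space after it and parted 3.1 or newer (device names are placeholders):

        # Sketch only: extend the last partition into free space that follows it
        sudo parted /dev/sdb resizepart 1 100%   # rewrite the end of partition 1; data stays in place
        sudo resize2fs /dev/sdb1                 # grow the ext4 filesystem to fill the partition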

    Read the article

  • Dell PERC 5 - RAID-10 keeps rebuilding drive 2 every day

    - by raid question
    I have a Dell PowerEdge 2950 with this card:

        RAID bus controller [0104]: Dell PowerEdge Expandable RAID controller 5 [1028:0015]

    and six disks in a RAID-10. I replaced drive 2, because it didn't show up, and then it started to rebuild itself:

        root@backup01:~# megaraidsas-status
        -- Arrays informations --
        -- ID | Type    | Size    | Status
        a0d0  | RAID 10 | 5587GiB | DEGRADED

        -- Disks informations --
        ID     | Model                        | Status  | Warnings
        a0e8s0 | ATA ST2000DM001-9YN1 1863GiB | online  | errs: media:0 other:5393
        a0e8s1 | ATA ST2000DM001-9YN1 1863GiB | online  | errs: media:0 other:5394
        a0e8s2 | ATA ST2000DM001-1E61 1863GiB | rebuild | errs: media:0 other:99
        a0e8s3 | ATA ST2000DM001-9YN1 1863GiB | online  | errs: media:0 other:5393
        a0e8s4 | ATA ST2000DM001-9YN1 1863GiB | online  | errs: media:0 other:5393
        a0e8s5 | ATA ST2000DM001-9YN1 1863GiB | online  | errs: media:0 other:5393

    The rebuild finishes, then the virtual drive becomes optimal, and drive 2 goes online. Then once a day, drive 2 acts like it's been removed, and the rebuild starts all over again. How do I make this once-a-day rebuild stop?

        Event Description: Removed: PD 02(e1/s2)
        Event Description: Removed: PD 02(e1/s2) Info: enclPd=08, scsiType=0, portMap=04, sasAddr=1221000002000000,0000000000000000
        Event Description: State change on VD 00/0 from OPTIMAL(3) to DEGRADED(2)
        Event Description: VD 00/0 is now DEGRADED
        Event Description: State change on PD 02(e1/s2) from ONLINE(18) to FAILED(11)
        Event Description: State change on PD 02(e1/s2) from FAILED(11) to UNCONFIGURED_BAD(1)
        Event Description: Background Initialization failed on VD 00/0
        Event Description: Inserted: PD 02(e1/s2)
        Event Description: Inserted: PD 02(e1/s2) Info: enclPd=08, scsiType=0, portMap=04, sasAddr=1221000002000000,0000000000000000
        Event Description: PD 02(e1/s2) is not a certified drive
        Event Description: State change on PD 02(e1/s2) from UNCONFIGURED_BAD(1) to UNCONFIGURED_GOOD(0)
        Event Description: State change on PD 02(e1/s2) from UNCONFIGURED_GOOD(0) to OFFLINE(10)
        Event Description: Rebuild automatically started on PD 02(e1/s2)
        Event Description: State change on PD 02(e1/s2) from OFFLINE(10) to REBUILD(14)
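
    The "is not a certified drive" message suggests the PERC is flagging the non-Dell replacement disk, but a disk that drops out daily can also simply be failing, or the slot/backplane port can be bad. A sketch of how one might check the replacement drive's own SMART data through the controller, assuming smartmontools is installed and the virtual drive is exposed as /dev/sda (the megaraid device number 2 is an assumption matching slot 2):

        # Sketch only: query SMART data for the physical disk behind the PERC
        sudo smartctl -a -d megaraid,2 /dev/sda | egrep -i 'reallocated|pending|result'
        # Also worth trying: move the new drive to a different slot/caddy to rule out
        # a bad backplane port before blaming the disk itself.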

    Read the article

  • Distorted Sound

    - by BCable
    I have my laptop hooked up to my receiver for sound output. I hear a hissing/crackling background sound that is really loud and hard to ignore (but possible). When my 360 is connected, the sound comes out perfect, so it's just with this laptop. Previously, I thought it was just my laptop and submissively let it slide. I just bought a brand new laptop though and it's doing the same thing. Now that I know it's not the laptop itself, here is what I have found out:

      - I have used this laptop in similar environments where it worked just fine (different speakers).
      - I bought a new cable to connect to my receiver and it did nothing (headphone jack to RCA).
      - I tried different ports on the receiver (Video 1-3) and it always happens.
      - I have discovered that the sound goes away if I unplug my laptop (so it's running on battery).
      - Because of the last one, I tried plugging my laptop into a different outlet across the room and it's STILL doing it.
      - It doesn't matter if I boot to Linux or Windows, yet my phone (Android G1) doesn't cause this sound using the exact same cable.

    Any ideas? I'm out of them! Thanks!

    Read the article

  • Passenger connection reset by peer issue

    - by user887372
    I am new to Ruby on Rails. I am using Passenger 3.0.17 to deploy my Ruby on Rails 3.2.6 project. My project is working fine, but I get a 500 internal error when I try to upload files to the server. I checked my Passenger log and found:

        [ pid=20654 thr=140394143790848 file=ext/nginx/HelperAgent.cpp:933 time=2012-11-01 09:29:57.82 ]: Uncaught exception in PassengerServer client thread:
           exception: write() failed: Connection reset by peer (104)
           backtrace:
             in 'void Client::forwardResponse(Passenger::SessionPtr&, Passenger::FileDescriptor&, const Passenger::AnalyticsLogPtr&)' (HelperAgent.cpp:705)
             in 'void Client::handleRequest(Passenger::FileDescriptor&)' (HelperAgent.cpp:859)
             in 'void Client::threadMain()' (HelperAgent.cpp:952)

        2012/11/01 09:29:27 [crit] 20691#0: *431 mkdir() "/tmp/passenger-standalone.20640/proxy_temp/2" failed (2: No such file or directory) while reading upstream, client: 124.172.71.55, server: _, request: "GET /assets/jquery.js?body=1 HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "test.com:3000", referrer: "http://test.com:3000/"
        2012/11/01 09:29:33 [crit] 20691#0: *435 mkdir() "/tmp/passenger-standalone.20640/proxy_temp/3" failed (2: No such file or directory) while reading upstream, client: 124.172.71.55, server: _, request: "GET /assets/background.png HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "test.com:3000", referrer: "http://test.com:3000/"

        [ pid=20654 thr=140394115462912 file=ext/nginx/HelperAgent.cpp:933 time=2012-11-01 09:29:33.543 ]: Uncaught exception in PassengerServer client thread:
           exception: write() failed: Connection reset by peer (104)
           backtrace:
             in 'void Client::forwardResponse(Passenger::SessionPtr&, Passenger::FileDescriptor&, const Passenger::AnalyticsLogPtr&)' (HelperAgent.cpp:705)
             in 'void Client::handleRequest(Passenger::FileDescriptor&)' (HelperAgent.cpp:859)
             in 'void Client::threadMain()' (HelperAgent.cpp:952)

    Please guide me regarding this issue. I am unable to find the reason for the peer reset and the failed mkdir(). Thanks in advance.
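
    One possible lead (an assumption, not confirmed by the log): the mkdir() failures point at missing nginx proxy_temp subdirectories under the Passenger Standalone temp directory in /tmp, and a common cause of that is a /tmp cleaner such as tmpwatch or tmpreaper pruning the directory while Passenger is running. A sketch of how one might check and recover (flags vary by Passenger version):

        # Sketch only: see whether a tmp cleaner runs on this box...
        ls /etc/cron.daily/ /etc/cron.d/ 2>/dev/null | egrep -i 'tmpwatch|tmpreaper'
        # ...and restart Passenger Standalone so it recreates its temp directory tree
        passenger stop -p 3000 && passenger start -p 3000 -d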

    Read the article

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser points to a local webpage (examplePage) after playing with some config files a while back, and I can't figure out how to restore the default behavior.

    Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple weeks ago, I was on a mission to get the site's root-relative URLs (/resource) to work, so I played around with a bunch of apache/conf files, including httpd.conf and httpd-vhosts.conf, and also was messing with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of what I did. Many of my changes stemmed from suggestions in this StackOverflow post.

    What I've tried:

      - I commented out my additions to the hosts file
      - I turned off XAMPP (thus hopefully negating any apache config file effect)
      - I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs)

    localhost still displays examplePage, even with XAMPP turned on (my reverted DocumentRoot isn't taking effect). Does anyone know what I may have done and how I can fix it?

    Update: It's been resolved, thank you everyone so much. In Task Manager there were a couple of instances of httpd.exe (Apache HTTP Server). I ended these, then opened XAMPP and restarted Apache. All references to examplePage in my .conf files that I could find had been commented out or removed; I imagine that the old versions were still in effect for some reason, and manually ending the Apache processes fixed this. As a point of interest, it's still a mystery why those processes were running - I cannot reproduce that situation. I must've stumbled upon a XAMPP bug of some sort.

    Read the article

  • Windows Server 2008 DNS: I'm confused

    - by Dejan.S
    Hi. I recently set up a Windows Server 2008 server at work. Keep in mind I have never worked with it before :). Background story: I am trying to host a couple of sites on the server through IIS7. I have domains (currently hosted at other hosters for the moment), and I want to point the domain NS to my server on all of them. I have read how to set up DNS on the server, so far so good. Now my DNS zone is companyname.com; in Server Manager I have DNS / companyname.com, in there I have ns.companyname.com, and in there I have a Host (A) record with the server IP. Now this is where I get confused about how things work with DNS, NS & Host (A) records: I don't know how to assign (so to speak) the Host (A) record to one of my web apps hosted in IIS7, because that is the pointer, right? To give an example to work with, let's say hosted.com is hosted on my IIS7, on port 81. You don't understand how grateful I would be if somebody could explain this confusion.

    EDIT: Do I need to create a DNS zone for every site hosted on my server, or just make an A Host/Record? Thanks guys

    Read the article

  • IIS 7.5 FTP Service crashes after installation of Advanced Logging 1.0 Module

    - by Jeremy
    I've recently been tasked with setting up two new production servers for an ASP.Net application. The servers sit behind an F5 load balancer, which in turn forwards the end user's IP address via the standard X-Forwarded-For HTTP header. All of the reading that I have done suggests that I need to install the IIS Advanced Logging module in order to take advantage of the X-Forwarded-For HTTP header.

    Some quick background: both of the web servers are Windows 2008 R2 Standard (x64), with IIS 7.5 installed and configured. The FTP role has also been installed, configured and is operational.

    The issue: after installing the IIS Advanced Logging module via the Web Platform Installer, I noticed the following error in the Event Viewer:

        The FTP Service encountered an error trying to read configuration data from file \\?\C:\Windows\system32\inetsrv\config\applicationHost.config, line number 374. The error message is: Unrecognized element 'advancedLogging'

    Trying to connect over FTP to either of the web servers results in a 530. I've spent 2 hours scouring Google trying to find a solution, short of uninstalling the Advanced Logging module. As far as I can tell, there is no way to turn off Advanced Logging on a site-per-site basis. Help would be appreciated.

    Read the article

  • sudoer scheme for another web developer that retains my future control of a virtual server?

    - by Tchalvak
    Background: Virtual Private Server. I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop.

    The problem: retain control. Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example.

    I need him not to be able to:

      - take away other admin permissions
      - change the root password
      - have control over other security/administrative functions

    I would like him to still be able to:

      - install software (through apt-get)
      - restart apache
      - access mysql
      - configure mysql/apache
      - reboot
      - edit web development configuration type files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above.

    Edit: I have not yet finalized permissions; standard, useful sudo setups are certainly an option. The lists above are more what I'm hoping I can do; I don't know that that setup can be done.
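
    A minimal sudoers sketch along those lines, assuming the developer's account is named "webdev" (a hypothetical name) on a Debian/Ubuntu-style layout; note that unrestricted apt-get effectively lets him install anything as root, which the list above accepts as a trade-off. Edit with visudo rather than directly.

        # Sketch only - add via visudo; "webdev" is a placeholder username
        Cmnd_Alias WEBDEV_CMDS = /usr/bin/apt-get, \
                                 /usr/sbin/service apache2 *, \
                                 /usr/sbin/service mysql *, \
                                 /sbin/reboot
        Cmnd_Alias WEBDEV_EDIT = sudoedit /etc/apache2/*, sudoedit /etc/mysql/*
        webdev ALL = (root) WEBDEV_CMDS, WEBDEV_EDIT

    Because webdev is not in the sudo/admin group and has no blanket ALL rule, he cannot run passwd root, visudo, or su -, so administrative control stays with you.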

    Read the article

  • Apache2: 400 Bad Request with Rewrite Rules, nothing in error log?

    - by neezer
    This is driving me nuts.

    Background: I'm using the built-in Apache2 & PHP that comes with Mac OS X 10.6. I have a vhost set up as follows:

        NameVirtualHost *:81

        <Directory "/Users/neezer/Sites/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <VirtualHost *:81>
            ServerName lobster.dev
            ServerAlias *.lobster.dev
            DocumentRoot /Users/neezer/Sites/lobster/www

            RewriteEngine On
            RewriteCond $1 !^(index\.php|resources|robots\.txt)
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L,QSA]

            LogLevel debug
            ErrorLog /private/var/log/apache2/lobster_error
        </VirtualHost>

    This is in /private/etc/apache2/users/neezer.conf. My code in the lobster project is PHP with the CodeIgniter framework. Trying to load http://lobster.dev:81/ gives me:

        400 Bad Request

    Normally, I'd go check my logs to see what caused it, yet my logs are empty! I looked in both /private/var/log/apache2/error_log and /private/var/log/apache2/lobster_error, and neither records ANY message relating to the 400. I have LogLevel set to debug in /private/etc/apache2/http.conf.

    Removing the rewrite rules gets rid of the error, but these same rules work on my MAMP host. I've double-checked and rewrite_module is loaded in my default Apache installation. My http.conf can be found here: https://gist.github.com/1057091

    What gives? Let me know if you need any additional info.

    NOTE: I do NOT want to add the rewrite rules to .htaccess in the project directory (it's checked into a git repo and I don't want to touch it).
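
    One thing worth checking (an educated guess, not confirmed by the logs): in VirtualHost context, unlike in a per-directory .htaccess, the RewriteRule pattern sees the URL path with its leading slash and the substitution is treated as a URL path, so a relative target like index.php/$1 can yield an invalid request URI. A hedged sketch of the same rules adjusted for server/vhost context:

        # Sketch only - same logic, with the pattern tolerating the leading slash
        # and an absolute substitution so the rewritten URI is valid in vhost context
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond $1 !^(index\.php|resources|robots\.txt)
        RewriteRule ^/?(.*)$ /index.php/$1 [L,QSA]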

    Read the article

  • What is the latest on Microsoft Expression Studio licensing?

    - by DanM
    In the past, there's been an issue with Microsoft not allowing you to deactivate an Expression Studio key. Basically, you get two keys per license. If you assign both keys (say one to a desktop and one to a laptop), then you upgrade to a new machine (say you replace your laptop or upgrade some of the hardware), you have to buy a new copy of Expression Studio ($600 for Ultimate). This seems ludicrous to me, and I'm wondering if anyone knows if this policy is still in place. I can't seem to find a EULA online anywhere, so I don't know where to find this information. I know my laptop is due for replacement soon, and I want to know if I'm going to have to sink $600 into a software product I already purchased. For background, please refer to this thread on the Microsoft Expression forums: http://social.expression.microsoft.com/Forums/en-US/general/thread/da5587bc-b098-4c6a-9a56-af3608d940d0 Note that this thread is locked. Microsoft doesn't seem to want people to discuss this. This is one reason I'm posting here rather than on that site.

    Read the article

  • Balancing internal services using a Cisco CSS 11501

    - by Ladadadada
    First, the background to the problem: I have a Cisco CSS11501 that I am using to load balance a few web servers. These web servers have two network interfaces, one internal and one external and we are sending the requests to the internal interface. We have the CSS configured to do NAT because our webservers need to see the client's IP address. Because the TCP packets hit the webservers with a source address on the Internet, the webserver tries to send the packet back to the client over the external interface and not through the load balancer. In order to stop these requests being sent back out to the Internet via the external interface, we added a routing rule on these boxes so that all traffic with a source address on the internet will use the load balancer as the gateway. This part works fine. What I would also like to to is use the CSS as a load balancer for internal services such as our MySQL slaves. When I do this, I run into a similar problem; the TCP connection goes from the web server to the load balancer and then from the load balancer to the MySQL slave but the CSS spoofs a source address of the original webserver. The MySQL slave then tries to send the response directly to the webserver via the internal network and not via the load balancer. The ideal solution would be to tell the CSS not to do source address spoofing on the internal network and only do it for requests originating on the Internet. Is this possible ? Failing that, is there a way of directing the load balanced traffic back through the load balancer while keeping the other traffic (say SSH) purely on the internal network ? Is there another way of using the CSS11501 to load balance internal services ?

    Read the article

  • Cron won't use msmtp to send emails in case of failed cronjob

    - by Glister
    I'm trying to configure a machine so that it will send me an email if one of the cronjobs outputs something in case of an error. I'm using Debian Wheezy. Cron is working normally (without the email functionality). msmtp is installed and configured, and I have already symlinked /usr/{bin|sbin}/sendmail to /usr/bin/msmtp. I can send email by using:

        echo "test" | mail -s "subject" [email protected]

    or by executing:

        echo "test" | /usr/sbin/sendmail

    Without the symlink (/usr/sbin/sendmail) cron will tell me that:

        (CRON) info (No MTA installed, discarding output)

    With the symlinks I get:

        (root) MAIL (mailed 1 byte of output; but got status 0x004e, #012)

    Can you suggest how to configure the cron/msmtp pair? Thanks!

    EDIT: Note: I've written "msmtpd" by mistake. It's not a daemon but rather an SMTP client named just "msmtp" (without the "d" ending). It is executed on demand and is not running in the background all the time. When I try to send an email using msmtp like that, it works:

        echo "test" | msmtp [email protected]

    On the far side, in the logs of the SMTP server, I read:

        Nov 2 09:26:10 S01 postfix/smtpd[12728]: connect from unknown[CLIENT_IP]
        Nov 2 09:26:12 S01 postfix/smtpd[12728]: 532301C318: client=unknown[CLIENT_IP], sasl_method=CRAM-MD5, [email protected]
        Nov 2 09:26:12 S01 postfix/cleanup[12733]: 532301C318: message-id=<>
        Nov 2 09:26:12 S01 postfix/qmgr[2404]: 532301C318: from=<[email protected]>, size=191, nrcpt=1 (queue active)
        Nov 2 09:26:12 S01 postfix/local[12734]: 532301C318: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=0.62, delays=0.59/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to command: IFS=' ' && exec /usr/bin/procmail -f- || exit 75 #1001)
        Nov 2 09:26:12 S01 postfix/qmgr[2404]: 532301C318: removed
        Nov 2 09:26:13 S01 postfix/smtpd[12728]: disconnect from unknown[CLIENT_IP]

    And the email is delivered to the target user. So it looks like the msmtp client is working properly. It has to be something in the cron/msmtp integration, but I have no clue what that thing might be. Can you help me?
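
    One observation that may help (my interpretation, not from the original post): cron reports the sendmail child's exit status in hex, and 0x4e is decimal 78, which is EX_CONFIG in sysexits.h - the code msmtp returns when it cannot load a usable configuration. Since cron runs the mailer as root with almost no environment, a likely culprit is that root has no ~/.msmtprc and there is no system-wide /etc/msmtprc with a default account. A minimal sketch of such a file (host, user and addresses are placeholders):

        # /etc/msmtprc - sketch only; adjust host, from and auth to your relay
        defaults
        tls on
        logfile /var/log/msmtp.log

        account mainrelay
        host smtp.example.com
        port 587
        from cron@example.com
        auth on
        user cronmailer
        password CHANGEME

        account default : mainrelay
        aliases /etc/aliases        # maps "root" to a real address, if your msmtp supports it

    After that, a quick test as root (echo test | /usr/sbin/sendmail root) should behave the same way cron's mailer does.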

    Read the article

  • Can I configure Thunderbird 3 to refresh the folder list for an Exchange IMAP account?

    - by Howiecamp
    Background: When used as an IMAP client against Gmail, Thunderbird 3 (this may be the case in v2 also, not sure) will refresh its list of folders (the folders correspond to Gmail labels) when you do "Download/Sync Now..." or restart the Thunderbird client. Any new folders (labels) created in Gmail will sync to the client, and any folders moved/changed/deleted in Gmail will move/change/delete on the client as well. (Note: Thunderbird has the concept of "subscribing" to IMAP folders, presumably allowing you to determine which folders you want, rather than bringing all of them down and dragging loads of data across the wire. When used against Gmail, Thunderbird appears to automatically subscribe to all folders, including folders newly created in Gmail, so this might be why the refresh is happening properly.)

    This behavior is what I want with Exchange. When using Thunderbird with Exchange (2007), the folder list doesn't refresh when folders are added/changed/deleted on the server and/or from a different mail client. When I look at the subscription options, some are checked and some are not (not sure why Thunderbird picked some and not others). And when I add new folders on the server and/or from another client, they never even appear in Thunderbird's list of folders, preventing me from subscribing to them.

    Read the article

  • web application or web portal

    - by klo
    As the title says: differences between those two. I have read all the definitions and some articles, but I need information about some other aspects. Here is the thing. We want to build a web site that will contain: a site, a database, uploads, and numerous background services that would have to collect information from uploads and from some other sites, parse them, etc. I doubt that there are portlets that fit our specific needs, so we will have to make them ourselves. So, questions:

      1. Deployment (and difference in cost if possible): is deploying portals much easier than web apps (Java or .NET)?
      2. Server load: does a portal consume much of the server's power (and can you strip the portal of things that you do not use)?
      3. Implementation and development of portlets: can you make all the things that you could have done in Java or .NET?
      4. General thoughts on when to use portals and when to use a classic web app.

    Thanks all in advance...

    Read the article

  • Reducing video mode switching during Linux boot

    - by Zack
    When I boot up my desktop computer, which only has Linux on it, the video mode and/or console font gets switched four times:

      1. When GRUB starts, it switches from 80x25 text to a graphical mode so it can draw a pretty background behind its menu;
      2. GRUB then goes back to 80x25 text after I pick something from the menu;
      3. When the KMS driver for my video card loads, it switches to a much higher-resolution text mode (I don't know if this is a hardware text mode or not);
      4. Finally X starts and it goes graphics and stays that way. I think this last switch does not change the resolution of the video mode, only the graphicalness.

    I'd like to get rid of as many of these mode switches as possible. Ideally, when GRUB takes over from the BIOS it would go directly to the same high-resolution text mode that the KMS driver selects, and the display would stay in that mode till X starts and brings up graphics. I am under the impression that this is possible by mucking with the kernel command line and/or the GRUB console module load parameters, but I don't know the details.

    GRUB 1.98+20100706, kernel 2.6.32.15 using Nouveau video drivers. Distro is Debian unstable. Please no answers that involve recompiling anything or cobbling together bleeding-edge kernel/driver combinations, I don't care enough about this to go to that much trouble.

    EDIT: Tobu suggests setting GRUB_GFXMODE to the full pixel resolution of the monitor, and GRUB_GFXPAYLOAD_LINUX=keep to avoid the mode switch after the menu goes away. This does part of what I want, but winds up being worse overall. There's no mode switch after the menu, but there's still a painfully-slow screen repaint (I should probably just give up on GRUB's gfxmode, it's waaaay too slow at 1920x1200). More seriously, there's now a double mode switch when nouveaufb loads, along with fun-looking error messages in dmesg:

        [    5.923798] [drm] nouveau 0000:02:00.0: allocated 1920x1200 fb: 0x40250000, bo ffff8801ba5f4600
        [    5.923802] fb: conflicting fb hw usage nouveaufb vs EFI VGA - removing generic driver
        [    5.923821] [drm] nouveau 0000:02:00.0: PFIFO_INTR 0x00000010 - Ch 1
        ("PFIFO_INTR" message repeats 400+ times)
        [    5.925609] Console: switching to colour dummy device 80x25
        [    5.925802] Console: switching to colour frame buffer device 240x75
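
    For reference, the settings from Tobu's suggestion live in /etc/default/grub on Debian; a sketch of the relevant lines (the resolution is this monitor's, and update-grub must be run afterwards for the change to take effect):

        # /etc/default/grub - sketch of the relevant settings
        GRUB_GFXMODE=1920x1200          # mode GRUB itself uses for the menu
        GRUB_GFXPAYLOAD_LINUX=keep      # hand that mode to the kernel instead of resetting to 80x25
        # then regenerate the config:
        #   sudo update-grub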

    Read the article

  • How to install Red Hat Enterprise Linux on Apple Macbook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one year old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do I can't get the installer to boot. I have tried multiple DVDs and even verified the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text all-generic-ide" on the boot line. I removed the ide parameter and just used "linux text". The results I get are that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading ata..... something; it disappears too fast for me to read. Then the machine freezes. I pressed the alt function keys to see if I could look at the system log; here is what it says:

        Alt-F3: trying to mount CD device hda
        Alt-F4: status error: hda: lastFailedSense
                hda: Failed opcode was: unknown
                hda: Lost interrupt
                hda: Drive not ready for command
                ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed as /dev/sda3. I think it has something to do with the CD drive, is that possible? Thanks again for the support.

    PS I posted in the Apple support forums (Apple.com Support Discussions, Boot Camp Installation and Storage) and didn't get an answer.

    Read the article

  • How do I install git/git-svn on RHEL5 with a custom perl install?

    - by kbosak
    I've had nothing but trouble trying to install Git on RHEL5. First I tried from source, but ran into several issues with installing the docs. There appeared to be missing libs and such for parsing xml that I couldn't figure out how to get installed and recognized. Then I tried using the EPEL yum repository and was able to install git and its docs but now git-svn is not working. It complains about not finding the perl modules Git.pm and SVN/Core.pm. When I set the GITPERLLIB environment variable to the location of those libs it seg faults. Some background: RHEL5 came with perl 5.8.8, but we wanted to use 5.10 so I installed that from source (to a custom location). Someone then symlinked the system perl binary to this newer version of Perl to make sure nobody uses the wrong version. Each developer also has their own build of Perl. So I'm wondering what's the best way to install Git on this system and have both the docs and git-svn working correctly for each user. Unfortunately I'm a developer and not as good with system administration so take it easy on me.
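
    A hedged sketch of how one might build git against the custom perl so that git-svn runs under the same interpreter as its Git.pm/SVN::Core bindings; PERL_PATH is a knob git's Makefile supports, the paths and version are placeholders, and building the docs additionally needs asciidoc and xmlto installed:

        # Sketch only: example paths for a custom perl installed under /opt/perl-5.10
        tar xzf git-1.7.x.tar.gz && cd git-1.7.x
        make prefix=/usr/local PERL_PATH=/opt/perl-5.10/bin/perl all
        sudo make prefix=/usr/local PERL_PATH=/opt/perl-5.10/bin/perl install
        # git-svn also needs SVN::Core built for that *same* perl, e.g.:
        #   /opt/perl-5.10/bin/cpan Alien::SVN
        # Mixing bindings compiled for the system perl 5.8.8 with a 5.10 interpreter
        # is a classic cause of the segfault described above.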

    Read the article

  • Is StoreJet Transcend (0x2329) an Advanced Format drive?

    - by Graham Perrin
    I use a 640 GB StoreJet Transcend (0x2329) with ZEVO Community Edition 1.1.1 on OS X 10.8.2.

    Question: is this drive Advanced Format?

    Background: I submitted a request for technical support to Transcend but the first response was gibberish, so I don't expect a reasonable follow-up. Models at http://www.transcend-info.com/Products/CatList.asp?LangNo=0&ModNo=293 are similar but different sizes (not 640 GB). Mine is probably 25M2 (TS640GSJ25M2). Unless I'm missing something, nothing currently in the Transcend support area tells me whether the drive is Advanced Format.

    From System Information in OS X 10.8.2:

        StoreJet Transcend:
          Capacity: 640.14 GB (640,135,028,736 bytes)
          Removable Media: Yes
          Detachable Drive: Yes
          BSD Name: disk3
          Product ID: 0x2329
          Vendor ID: 0x152d (JMicron Technology Corp.)
          Version: 0.00
          Serial Number: 322549FBA004
          Speed: Up to 480 Mb/sec
          Manufacturer: JMicron

    History for the ZFS pool shows creation in March 2012:

        macbookpro08-centrim:~ gjp22$ zpool history zhandy | grep create
        2012-03-14.17:29:37 zpool create -f -O compression=off -O copies=1 -O casesensitivity=insensitive -O snapdir=visible zhandy /dev/dsk/GPTE_1928482A-7FE4-482D-B692-3EC6B03159BA
        2012-06-22.15:51:16 zfs create zhandy/Pocket Time Machine

    At that time I almost certainly used ZEVO Setup Assistant to create the pool.

        macbookpro08-centrim:~ gjp22$ zpool get ashift zhandy
        NAME    PROPERTY  VALUE  SOURCE
        zhandy  ashift    0      default

    If I discover that the drive is Advanced Format, a different ashift value will be appropriate.
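
    A hedged way to check from the Mac itself, assuming smartmontools is installed (e.g. via Homebrew or MacPorts) and the JMicron USB bridge passes identify data through via SAT: a drive that reports a 4096-byte physical sector behind 512-byte logical sectors is Advanced Format.

        # Sketch only - device name taken from the System Information output above
        diskutil info disk3 | grep -i "block size"
        smartctl -i -d sat /dev/disk3 | grep -i "sector size"
        # e.g. "Sector Sizes: 512 bytes logical, 4096 bytes physical" => Advanced Format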

    Read the article

  • External USB HD with -optional- mains?

    - by Stephen
    Hi, I'm Christmas-present-buying, and I'd appreciate recommendations for a USB HD with an optional mains power input. I've hunted, but can't find all the information I want (partially due to sketchy product specifications). Background: This is for a digital TV which I do not own, and so I'd like to get it correct first time. The TV has a USB port to allow recording straight to disk, but the manuals don't say how much power can be drawn through the USB port. The manual's instructions state, possibly generically, to plug the drive in before connecting to the TV. Ideally I'd like a small (2.5"?) drive which can draw power over USB, with an mains power input if it turns out the USB port on the TV doesn't offer enough juice. The ideal is to use one cable, two max. A powered USB hub would introduce too much clutter. I've spotted that the LaCie Petit drives have what appears to be an additional power input, but I'm not even sure from the specs what that is. And the device doesn't ship with a mains adapter. Suggestions?

    Read the article

  • Apple / Mac OS X - Is there a Package Manager like Linux

    - by Walter White
    I am a Linux/UNIX user and love the package management that comes with it. For the most part, I like Ubuntu, but just like anything else, it is the minor things that you live with daily that would be nice if they just worked. My main issue is my wacom tablet while it works, every time there is an OS update, I have to rebuild the wacom driver. The other slightly annoying issue is, my ATI video card is not fully supported. When I use the HDMI out, the sound doesn't go through it, and the screen is not entirely used. I would happily get an Apple if it had a similar package management system like Ubuntu, Gentoo, or other Linux distribution. This takes the work out of getting the latest enhancements or fixes. It also takes all the guess work out about what you need to get something to work. I just want to use my computer, not administer it. Aside from Apple applications, if I wanted to install the GIMP on an apple, would it go and fetch ufraw if I wanted support for that and whatever other dependencies GIMP has? If I want Netbeans installed, will it go and get a JDK and maven if I want that? If not, is there something in the works? I know I don't update my applications that frequently, but that is mainly because I'm not aware of the updates. The updates all happen in the background. Walter
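
    Not an Apple-supplied feature, but third-party package managers cover much of this on OS X: Homebrew and MacPorts both resolve and build dependencies the way apt does. A sketch (exact formula/port names are illustrative and may differ):

        # Homebrew - sketch only
        brew install git              # dependencies are fetched and built automatically
        brew search gimp              # see what's available (GUI apps are often provided as casks)

        # MacPorts alternative
        sudo port install gimp        # MacPorts builds the dependency tree for you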

    Read the article

  • How can I unregister a service with dns-sd?

    - by Roman
    I am trying to use "dns-sd" command line tool on my Windows 7 machine. I can already do something. For example I can register a service using "dns-sd -R ...". I also can browser (see) registered services using "dns-sd -B ...". What I still miss, is how to unregister a service. At the moment when I type "dns-sd -R ..." the dns-sd does not return me to the command prompt. To return to the command prompt I need to press Ctrl-C. And the service stays registered till I press Ctrl-C. What I want is to run "dns-sd -R ..." in the background regime and then I would like to have a possibility to unregister a service from the command line. One more thing which I do not understand yet is what "to look up a service" means. In my picture it should be sufficient to register a service, to see it and then to unregister it. But apparently I need to look up a service. What does it mean and why I need to do it?

    Read the article

  • Bash script doesn't open in terminal on reboot

    - by twigg
    Quick overview: I have created a script that reboots the laptop after x amount of time and x amount of cycles. I have added the script to the start-up applications and the script does seem to be running in the background, but it never opens a terminal window. Am I missing something?

    Adding the code (this is saved in a file called countdown.sh):

        #!/bin/bash
        # check if passed.txt exists if it does, send to soak test
        if [ -f passed.txt ]; then
            echo reboot has passed $nol cycles
            sleep 5;
            echo Starting soak tests
            sleep 5;
            rm testlog.txt;
            rm passed.txt;
            phoronix-test-suite run quick-test
            exit 0;
        fi

        # check if file testlog.txt exists if not create it
        if [ ! -f testlog.txt ]; then
            echo >> testlog.txt;
        fi

        # read reboot file to see how many loops have been completed
        exec < testlog.txt
        nol=0
        while read line
        do
            nol=`expr $nol + 1`
        done

        # start the countdown, x is time limit
        let x=10;
        while [ $x -gt 0 ]; do
            clear;
            figlet "Rebooting in...";
            figlet $x;
            let x-=1;
            sleep 1;
        done;
        echo reboot success $nol >> testlog.txt;
        shutdown -r now;

        # set how many times the script should shutdown the laptop
        reboot_count=1

        # if number of reboots matches nol's then stop the script
        # create a new text file called passed.txt
        if [ "$nol" == "$reboot_count" ]; then
            echo reboot passed $nol cycles >> passed.txt;
        fi
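
    A guess at the cause (not something the post confirms): entries in Startup Applications run without any controlling terminal, so a script launched there never gets a window unless you start one yourself. A sketch of a startup command that wraps the script in a terminal emulator (the path is a placeholder):

        # Use this as the Startup Applications "command" - sketch only
        gnome-terminal -e "bash /home/user/countdown.sh"
        # or, with a plain xterm:
        #   xterm -e bash /home/user/countdown.sh

    Note that the shutdown at the end still needs root rights; people typically allow that with a NOPASSWD sudoers entry for /sbin/shutdown and call it via sudo in the script.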

    Read the article

  • Triple-monitor set-up (2 unique, 1 cloned): Can a VGA splitter be used on one output of a dual-head video card?

    - by stakx
    Background: I'm currently researching hardware components for some kind of information terminal we're building. This application of ours makes use of three output screens: (1) a touch screen where all user input is made; (2) a regular LCD monitor where the requested information is being displayed; and (3) a projector which displays exactly the same signal as screen (2) does. (All screens will run at the same resolution of 1024x768, btw.)

    Now I figured that using a dual-head video card would be sufficient, let's say a Matrox P690 low-profile PCI card. This would involve having a Y cable connected to the graphics card itself, then two DVI-to-VGA adapters at each end of the Y cable, and then having a VGA splitter on one of the VGA outputs. The following shows the setup in question:

        0--1---------2-> VGA (DSUB-15)
         \
          \----2-3---------> VGA (DSUB-15)
                 \
                  \-----------------> VGA (DSUB-15)

        0: graphics card (LFH60 jack)
        1: LFH60 to DVI-I dual monitor Y cable
        2: DVI-to-VGA adapters
        3: VGA splitter cable

    Question(s): Will this work? I'm particularly concerned about the following points:

      - Can a low-profile PCI video card output a signal which is strong enough for three monitors (even if it's a dual-head card)?
      - Does the combination of so many adapters and splitter cables work? (The LFH-to-DVI cable comes with the video card.)
      - Will the VGA splitter cable degrade the signal on the output screen & projector significantly? (If so, would a USB-powered splitter cable remedy this problem?)

    I can't possibly expect anyone to answer all those questions, but any input is appreciated.

    Read the article
