Search Results

Search found 20051 results on 803 pages for 'ribbon control'.


  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?
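    Assuming the answer turns out to be that the resources simply carry no freshness information, would something along these lines fix the link-click case? This is only a sketch for Apache with mod_expires enabled (my guess at the setup; the one-week lifetime is arbitrary):

        # Give static resources an explicit freshness lifetime so the browser can
        # reuse them on normal navigation without re-requesting them every time.
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType text/css "access plus 1 week"
            ExpiresByType application/javascript "access plus 1 week"
            ExpiresByType image/png "access plus 1 week"
            ExpiresByType image/jpeg "access plus 1 week"
        </IfModule>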

    Read the article

  • YSLow says certain CSS are not gzipped

    - by rhand
    YSlow keeps on telling me files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped, while the gzip test tool at Feed the Bot says I am all good:

        Compressed?              Yes
        Compression type         gzip
        Page size (Bytes)        32,493
        Compressed size (Bytes)  -1
        Saving (Bytes)           32,494
        Compression %            100%

    I added this to my .htaccess:

        # Gzip
        <ifModule mod_gzip.c>
          mod_gzip_on Yes
          mod_gzip_dechunk Yes
          mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
          mod_gzip_item_include handler ^cgi-script$
          mod_gzip_item_include mime ^text/.*
          mod_gzip_item_include mime ^application/x-javascript.*
          mod_gzip_item_exclude mime ^image/.*
          mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

        # Deflate
        <ifmodule mod_deflate.c>
          AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </ifmodule>

    The headers for the file mentioned are:

        CF-Cache-Status    MISS
        CF-RAY             13945df90a9a0c1d-AMS
        Cache-Control      public, max-age=2592000
        Connection         keep-alive
        Content-Encoding   gzip
        Content-Type       application/javascript
        Date               Thu, 12 Jun 2014 07:34:38 GMT
        Expires            Sat, 12 Jul 2014 07:34:38 GMT
        Last-Modified      Thu, 21 Feb 2013 01:29:18 GMT
        Server             cloudflare-nginx
        Transfer-Encoding  chunked
        Vary               Accept-Encoding

    Any ideas what I am missing here?
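    Would checking by hand settle whether the compression depends on the request that is made? For example (same URL as above, assuming curl is available):

        # With gzip offered by the client - should print "Content-Encoding: gzip"
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" \
            "http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2" | grep -i content-encoding

        # Without Accept-Encoding - roughly what a tester that never asks for gzip would see
        curl -s -o /dev/null -D - \
            "http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2" | grep -i content-encoding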

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
    Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from a simple "Deal 25% extra damage" to more complicated things like "Deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant. I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of a main Buff abstract class, where only the relevant events are overridden. Then each character could have a vector of currently applied buffs. Does this solution make sense? I can see dozens of event types being necessary, making a new subclass for each buff feels like overkill, and it doesn't seem to allow for any buff "interactions". That is, suppose I wanted to cap damage boosts so that even if you had 10 different buffs which all give 25% extra damage, you would only deal 100% extra instead of 250% extra. And there are more complicated situations that ideally I could control. I'm sure everyone can come up with examples of how more sophisticated buffs can potentially interact with each other in a way that, as a game developer, I may not want. As a relatively inexperienced C++ programmer (I have generally used C in embedded systems), I feel like my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
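    For reference, here is a rough sketch of the kind of structure I have in mind (class and method names like Buff::damageBonus are just placeholders, and the +100% cap is hard-coded purely for illustration):

        #include <algorithm>
        #include <memory>
        #include <vector>

        // One buff = one set of overridable event hooks; the defaults do nothing.
        class Buff {
        public:
            virtual ~Buff() = default;
            virtual void onTurnStart() {}
            // Additive damage bonus contributed by this buff (0.25 == +25%).
            virtual double damageBonus() const { return 0.0; }
        };

        class DamageBoost : public Buff {
        public:
            explicit DamageBoost(double bonus) : bonus_(bonus) {}
            double damageBonus() const override { return bonus_; }
        private:
            double bonus_;
        };

        class Character {
        public:
            void addBuff(std::unique_ptr<Buff> buff) { buffs_.push_back(std::move(buff)); }

            // The character aggregates the bonuses, so interaction rules such as a
            // cap live in one place instead of inside every individual buff.
            double dealDamage(double baseDamage) const {
                double bonus = 0.0;
                for (const auto& b : buffs_) bonus += b->damageBonus();
                bonus = std::min(bonus, 1.0);  // cap the total boost at +100%
                return baseDamage * (1.0 + bonus);
            }

        private:
            std::vector<std::unique_ptr<Buff>> buffs_;
        };

    The idea being that the character owns the aggregation step; whether that scales to the nastier interactions is exactly what I'm unsure about.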

    Read the article

  • How to tell your boss that he's a bad programmer? [closed]

    - by Doe
    Possible Duplicate: How to tell your boss that his programming style is really bad? There was a question about a boss having a bad programming style (weird booleans, empty loops, etc.). Having a bad/weird style does not imply being a bad programmer, but my situation is different. My boss produces some really nasty code for the project we are working on together (just the two of us). Examples: functions that span several screens (big screens - 1900 x 1200); deeply nested conditional and loop statements (up to 10 levels!!); too many static variables, singletons, and both (a singleton class with all the methods and members also static); code committed to the version control system that sometimes does not even compile; copy-pasted code instead of separating it into an independent function; missed deadlines, every time. "This is [C#|Java|Python], it doesn't need to be efficient, that's why we loop over the whole haystack to find the needle." "This is C/C++, it's fast enough to loop over the whole haystack to find the needle." There is much more to mention... But the worst part is that I have to redo much of the stuff he does, and my code, which I try to keep clean, is often polluted with the above-mentioned atrocities. He's reaching 30 soon, so his ways are pretty much established, and I don't even know if it's possible to change anything. I like the project, but sometimes I just want to quit...

    Read the article

  • FTP error when doing file transfer

    - by Ernie
    I'm running vsftpd version 3.0.2 over FTPES, and I'm having a bit of trouble with file transfers. It seems to work fine when I'm on the LAN, but not from an external IP address. I have the control port and data ports open on my server's software firewall and my router's firewall. When I'm using the service from an external IP address, it seems like sometimes a file transfer will complete, but it times out and I always get the client error "426 Failure writing network stream". I've tried several clients. I'm thinking the data connection is being dropped somewhere, either at the router or by some server policy; maybe because I'm using passive FTP? Suggestions?
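    If the passive data connection is the culprit, would pinning the passive port range and opening just those ports be the usual fix? A sketch of what I mean for vsftpd.conf (the port range and the address are placeholders, not my real values):

        # Pin the passive-mode data ports so both firewalls can allow them explicitly
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100
        # External address that clients should be told to connect back to (placeholder)
        pasv_address=203.0.113.10

    ...with the same 40000-40100 range forwarded on the router and opened in the server's software firewall.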

    Read the article

  • Uninstall SQL Server 2005 Express after Demoting the DC

    - by Walter Aman
    A Windows Server 2003 SP2 box hosting a now-orphaned installation of SQL 2005 Workgroup was pressed into service as a DC in a disaster recovery scenario. It has since been demoted. The server also hosts legacy apps for which we lack reinstallation resources; thus our desire to preserve it as close to intact as possible while removing the orphaned roles. All efforts to remove SQL 2005 through Control Panel and ARPWrapper /remove fail with error 29528. Should I abandon this and leave the orphaned SQL installation dormant, or is it reasonable to remove it post-demote?

    Read the article

  • Remote desktop to multiple windows machines on a LAN with dynamic IP

    - by kevyn
    Is it possible to use remote desktop to connect to multiple computers inside a network that has a dynamic IP address? I use a Netgear WPN824 router which has DynDNS on board - but I currently use No-IP to control the single computer that I use most frequently. Every so often I need to get onto a couple of other computers in the network, but I don't know how to go about this without logging onto one computer and then starting another RDC session from that machine. What I would like to be able to do is connect to my router, see a list of connected devices, and then choose which one to remote desktop onto. I appreciate this probably is not possible, but any other suggestions are welcome!

    Read the article

  • Building own kernel on ubuntu

    - by chris
    Hi, I'm trying to build my own kernel, as I want to write a kernel program which I need to compile into the kernel. So what did I do? Download from kernel.org, extract, do the make menuconfig and configure everything as needed, do a make, do a make modules_install, do a make install and finally do an update-grub. Result: it doesn't boot at all... Now I had a look here and it describes a different way of compiling a kernel. Could this be the reason why my way did not work? Or does anyone else have an idea why my kernel doesn't work?

    Edit: Great answer, ty Oli. But I tried it the old-fashioned way, and after one hour of compiling I got this message:

        install -p -o root -g root -m 644 ./debian/templates.master /usr/src/linux-2.6.37.3/debian/linux-image-2.6.37.3meinsmeins/DEBIAN/templates
        dpkg-gencontrol -DArchitecture=i386 -isp \
            -plinux-image-2.6.37.3meinsmeins -P/usr/src/linux-2.6.37.3/debian/linux-image-2.6.37.3meinsmeins/
        dpkg-gencontrol: error: package linux-image-2.6.37.3meinsmeins not in control info
        make[2]: *** [debian/stamp/binary/linux-image-2.6.37.3meinsmeins] Error 255
        make[2]: Leaving directory `/usr/src/linux-2.6.37.3'
        make[1]: *** [debian/stamp/binary/pre-linux-image-2.6.37.3meinsmeins] Error 2
        make[1]: Leaving directory `/usr/src/linux-2.6.37.3'
        make: *** [kernel-image] Error 2
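    Follow-up question: would the kernel tree's own deb-pkg target be a saner route than make-kpkg here? This is what I was planning to try next (assuming a 2.6.37 tree supports that target; the -j2 just matches my core count):

        cd /usr/src/linux-2.6.37.3
        make menuconfig                     # same configuration step as before
        make -j2 deb-pkg                    # builds linux-image/linux-headers .deb packages one level up
        sudo dpkg -i ../linux-image-2.6.37.3*.deb
        sudo update-grub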

    Read the article

  • Will I have internet connection issues next day, if I unplug router at night?

    - by headskracher77
    I did this regularly a few years ago and used to feel like I was 'being punished' by the Internet service provider for disabling access to my computer, because trying to re-connect the next day became a constant pain. I have the same Linksys router, a Comcast modem, and hi-speed broadband through their LAN. Question: who or what is at fault for lousy internet connections, slow connections, or no connections? (Everybody's tech dept. blames everybody else.) The router? 10 years old, maybe obsolete. The modem? Came with the service plan - can connect three devices on a shared connection. The ISP? I read they not only control and completely regulate bandwidth usage, but they also ration it!! (true?) So can I safely 'pull the plug' each night for security or not? thnx

    Read the article

  • How do I stop track changes from turning on automatically in Word 2007

    - by Benj
    Whenever I open an existing document in Word 2007 (on Windows XP), Word turns on track changes and switches the display mode to "Final" (that is, not "Final Showing Markup") - so I often don't even notice track changes is on unless I remember to check. This happens for ALL existing documents, and doesn't happen for new documents. I can't find any option in the configuration that would control this behavior. I would like to restore the original/default behavior, where documents open with Track Changes off and in "Final Showing Markup" display. Steps to reproduce: Open Word 2007. Create a new document. Verify that track changes is off. Save the document and close Word. Open the document (either directly or through Word). Track changes is now on. Any ideas?

    Read the article

  • Samba as a PDC and offline authentication

    - by Aimé Barteaux
    Say I have a Windows laptop which has been joined to a domain. The domain has a Samba server as a PDC. Now say that I move the laptop outside of the network (the network is completely inaccessible). Will I be able to log on to accounts I have accessed before on the laptop (through GINA)? Update: Looking at the smb.conf documentation I noticed the setting winbind offline logon: "This parameter is designed to control whether Winbind should allow to login with the pam_winbind module using Cached Credentials. If enabled, winbindd will store user credentials from successful logins encrypted in a local cache." To me it looks like this solves the issue, but can anyone else confirm it and/or point out if any additional values need to be set?
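    In other words, would something like the following be enough? (The pam_winbind.conf part is my assumption about the matching client-side knob; I may have the option name wrong.)

        # smb.conf on the domain member
        [global]
            winbind offline logon = yes

        # /etc/security/pam_winbind.conf
        [global]
            cached_login = yes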

    Read the article

  • How to record desktop session with sound on Moblin?

    - by Moblin Newbie
    I have tried to record my desktop session with sound on a netbook running Moblin, but I can't seem to be able to record sound. xvidcap just says "error accessing /dev/dsp". Are there some options I should pass to xvidcap? Should I use some other recording application? Update: I am using the latest xvidcap (1.1.7) and have read the FAQ. Unfortunately Moblin's gnome-volume-control looks nothing like what is linked to from the xvidcap FAQ; there is no way to set or even look at the details the screenshot shows, as far as I know. alsamixer shows PulseAudio is in use, if that gives anyone any clues. The device is an Acer Aspire One.
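    Would wrapping xvidcap in the PulseAudio OSS shim be an option, something like the following (assuming padsp from pulseaudio-utils is available on Moblin - I haven't verified that)?

        # padsp preloads an OSS emulation layer, so xvidcap's /dev/dsp access
        # should be routed to PulseAudio instead of a real OSS device
        padsp xvidcap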

    Read the article

  • setting up freedns with an existing domain

    - by romeovs
    I've been running a webserver off of a PC at a static IP successfully for the past 5 months. Recently, however, I've moved into another apartment and my ISP only provides a dynamic IP (my IP changes from time to time). I'm not an internet genius, but I was thinking to fix this by using a dynamic DNS provider. So I got on the web and found FreeDNS. I'm a bit confused about how to set everything up, though. I've managed to successfully install the IP updater daemon on my web server. Then, in my registrar's control panel, I set the NS records to point at ns1 through ns4.afraid.org (removing the old NS records). I'm not certain what I should do with the A records, though (for now they are still pointing to the old static IP address). I have A records for www, blog, irc, etc. but I cannot point them at my new IP address, because it isn't static. Could someone explain this in the clearest possible sense (perhaps elaborating on what happens at each step of the DNS process)? I never really knew what the A records are for anyway. (Note that I haven't really found any documentation at the FreeDNS website, or on Google.)
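    For instance, is the idea that the zone conceptually ends up looking like the lines below, with FreeDNS rewriting the A records whenever the updater daemon reports a new address? (203.0.113.10 is just a stand-in for whatever my IP happens to be at the moment.)

        ; delegation at the registrar - only the NS records matter there now
        example.com.         NS   ns1.afraid.org.
        example.com.         NS   ns2.afraid.org.

        ; records managed on the FreeDNS side, updated by the daemon
        www.example.com.     A    203.0.113.10
        blog.example.com.    A    203.0.113.10
        irc.example.com.     A    203.0.113.10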

    Read the article

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general of a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics.) The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is no consensus of opinion about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("Code Smells", etc). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"? So that's my question: what would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

    Read the article

  • Project frozen - what should I leave to the people after me?

    - by Maistora
    So the project I've been working on is now going to be frozen indefinitely. It is possible that if and when the project unfreezes again, it won't be assigned to me or anybody from the current team. Actually, we inherited the project after it had been frozen before, but there was nothing left by the prior team to help us understand even the basic needs of the project, so we wasted a lot of time getting to know it well. My question is: what do you think we should do to help the people after us best understand the needs of the project, what we have done, why we've done it, and so on? I am also open to other ideas about what traces we should leave for whoever works on this project next. Some steps we have already taken: technical documentation (not complete, but at least there is some); source-control system history; estimations of which parts of the project need improvement and why we think so; a bunch of unit tests; an issue tracker with all the tickets we've done. What do you think of what we've already prepared, and what else can we do?

    Read the article

  • Windows 8 Activation - Product ID: Not available

    - by Guy Thomas
    The situation: I downloaded Windows 8 RTM from MSDN (I have a subscription). Naturally, I downloaded the product key as well. Windows 8 installed like a dream: lightning fast with no problems. I accepted the product key at the beginning of the install. Next, I thought I would download Updates, but they failed, so I checked the system's activation in Control Panel System. Problem: It returned "Product ID: Not available." There's nothing under "Windows activation" that I can click on, no blue links. I had a 'Chat' with MSDN, who introduced me to SLUI.exe. On Windows 8 it did nothing. (On Windows 7 it is supposed to bring up the Activation Menu). I phoned the Microsoft Activation number, they told me to contact MSDN. MSDN left the 'chat' by telling me to contact Microsoft! Hmm... I wonder if anyone at SuperUser can help?
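    Is the command-line route worth a try before escalating further? For example, from an elevated command prompt (the key below is obviously just a placeholder for the MSDN one):

        :: show detailed licensing / activation status
        slmgr.vbs /dlv

        :: (re)install the product key
        slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

        :: attempt activation
        slmgr.vbs /ato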

    Read the article

  • Xmodmap configuration

    - by Krishna S
    On my Debian Linux machine Ctrl+Alt+F1 is bound to a virtual terminal. I can see the corresponding entry by running xmodmap -pke:

        keycode 67 = F1 XF86_Switch_VT_1 F1 XF86_Switch_VT_1

    Per this thread, which I might add is consistent with what I've read elsewhere, the columns on the right-hand side of = correspond to key, Shift+key, AltGr+key and Shift+AltGr+key. Given that, I don't understand how the keycode mapping for F1 (above) works for Ctrl+Alt+F1. It seems it should really be either Shift+F1 or Shift+AltGr+F1? Here's the output of xmodmap -pm on my machine:

        shift       Shift_L (0x32),  Shift_R (0x3e)
        lock        Caps_Lock (0x25)
        control     Control_L (0x42),  Control_R (0x69)
        mod1        Alt_L (0x40),  Alt_R (0x6c),  Meta_L (0xcd)
        mod2        Num_Lock (0x4d)
        mod3
        mod4        Super_L (0x85),  Super_R (0x86),  Super_L (0xce),  Hyper_L (0xcf)
        mod5        ISO_Level3_Shift (0x5c),  Mode_switch (0xcb)

    Can anybody explain it?

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code: the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();

            return partitionQuery.ToList();
        }

    This code gives you automatic retries, because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results

    With sequential fetch methods: developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With fetch strategies: when using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

      - Support for batch updates, deletes and inserts
      - Conversion of entities to DataRow, and List<> to a DataTable
      - Extension methods for Delete, Merge, Update, Insert
      - Support for asynchronous calls and cancellation
      - Support for fetch statistics (total bytes, total REST calls, retries...)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Forcing Acrobat Reader font

    - by Jack
    I have a netbook with Linpus Linux and I'm trying to open automatically generated documents with Acrobat Reader that use Verdana but do not have it embedded inside the PDF file. Linpus doesn't natively come with any Verdana font, so I had to install it inside /usr/share/fonts/ by doing mkfontdir and fc-cache to force a recache of the fonts. Since then I've been able to select it inside other programs (e.g. OpenOffice), but I'm still unable to open these PDFs. It seems that Acrobat is unable to find the font anyway. Since I have no control over how these PDFs are generated, is there a way to force Acrobat to use a specific font if the one it needs is not found? Or maybe Acrobat needs a different kind of font configuration on Linux? Thanks in advance

    Read the article

  • Enabling Compiz Viewport Switcher key bindings

    - by David Moles
    I'm running compiz 0.8.2 with compizconfig on Scientific Linux 6.2 with Gnome 2.28.2. In the compizconfig "General Options" I have "Desktop Size" set as follows: Horizontal Virtual Size: 6 Vertical Virtual Size: 1 Number of Desktops: 1 This gets me the layout I want, i.e. 6 workspaces in a horizontal layout. Ctrl-alt-cursor-keys work fine for switching between them. However, I can't figure out how to get key bindings for specific workspaces. I've tried enabling "Viewport Switcher" in compizconfig, and tried various combinations both in "Number-based viewport switching" and "Go to specific viewport", to no apparent effect. My first thought was that something else was eating the specific key bindings I chose, but I think I've tried every combination of shift, control, alt and super (i.e., the Windows key) by now. I tried setting 6 desktops under "General Options" instead of one desktop with horizontal virtual size 6, but that doesn't seem to make a difference either. What am I missing?

    Read the article

  • How do I know if I'm getting the most out of my video card?

    - by b.long
    My computer at home is a bit lacking, so I want to make sure I'm getting the most out of it while I can. Generally speaking, here are the specs: 4GB memory, AMD Athlon(tm) 64 X2 Dual Core Processor 5200+ × 2, 64-bit Ubuntu. The terminal shows me the following:

        me@home:~$ uname -a
        Linux home 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        me@home:~$ lspci | grep VGA
        01:00.0 VGA compatible controller: ATI Technologies Inc RV380 [Radeon X600 (PCIE)]

        me@home:~$ sudo lshw -C video
          *-display:0
               description: VGA compatible controller
               product: RV380 [Radeon X600 (PCIE)]
               vendor: ATI Technologies Inc
               physical id: 0
               bus info: pci@0000:01:00.0
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
               configuration: driver=radeon latency=0
               resources: irq:44 memory:e0000000-efffffff ioport:ac00(size=256) memory:fdef0000-fdefffff memory:fdec0000-fdedffff
          *-display:1 UNCLAIMED
               description: Display controller
               product: RV380 [Radeon X600]
               vendor: ATI Technologies Inc
               physical id: 0.1
               bus info: pci@0000:01:00.1
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm pciexpress bus_master cap_list
               configuration: latency=0
               resources: memory:fdee0000-fdeeffff

        me@home:~$ lspci -nn | grep VGA
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc RV380 [Radeon X600 (PCIE)] [1002:5b62]

    The additional drivers menu in System Settings shows me nothing useful, and my attempt at installing ATI's Catalyst Control Center (the drivers that came with the video card) failed. I believe the latest version of Ubuntu at the time was 9.x. What should I do? Install an old version of Ubuntu 9? Use some alternative driver? UPDATE: I might try my hand at a bit from this answer next: "Installing Catalyst Manually (from AMD/ATI's site)". From a terminal, fgl_glxgears returns "fgl_glxgears: command not found". Any thoughts?
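    For what it's worth, would checking which renderer is actually active help narrow this down? For example (assuming the mesa-utils package is installed):

        # shows whether the open-source radeon driver is handling 3D
        glxinfo | grep -i "opengl renderer"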

    Read the article

  • Computer never connects to the internet automatically on startup?

    - by RawR Crew
    I have my Windows Vista laptop connected directly to the router via an ethernet cable, and every time I switch the computer on, the computer cannot connect to the internet. It comes up as limited or no connectivity, I am assuming because it has not been assigned an IP address by the router - not too sure if this is right. The problem is usually fixed either by performing a repair through the networking control panel or removing and re-inserting the ethernet cable. It will also connect without doing any of these if the computer is left idle for about 20 minutes. It will connect fine wirelessly without the need for any of this, however I would prefer to connect via the cable. Any ideas how I can fix this? I have replaced the ethernet cable and router already (identical model) but these haven't helped. Thanks for any help on this.
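    Is there anything quicker than the repair wizard in the meantime - for instance, just releasing and renewing the DHCP lease from a command prompt, which might also show whether the lease is what's failing?

        ipconfig /release
        ipconfig /renew
        REM "ipconfig /all" afterwards shows whether a lease was actually obtained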

    Read the article

  • Automatically cycle numerous or large files to the trash

    - by minameismud
    I've been tasked with fixing a vendor's program that, under certain conditions, dumps gigs of junk files into a log directory. It ends up filling users' machines. My task is to figure out how to make it stop without any source code or additional running processes, and without making the program kasplode. In other words, I'm looking to use a feature of the file system to control the growth. One idea I had was to make a hard link from that folder to NUL, as you might with /dev/null in the linux world. However, my attempts to use the mklink program to create a junction result in a message that says Local volumes are required to complete the operation. Any ideas on how to complete the junction, or other ideas to solve the problem?
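    Failing the junction idea, would a scheduled cleanup be an acceptable fallback? Something like this is what I had in mind, run as a scheduled task (the path and the one-day age are placeholders):

        REM delete log files older than one day under the vendor's log directory
        forfiles /P "C:\Vendor\Logs" /S /M *.log /D -1 /C "cmd /c del @path"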

    Read the article

  • Redirect to new domain and preserve username

    - by David Brown
    I recently switched to a new domain for a version control server I run. The server is usually accessed with a username included in the url such as https://[email protected]/some/stuff. I want to redirect requests to the old domain to the new domain and preserve everything else in the url (including the username). So the former url would be redirected to https://[email protected]/some/stuff. Currently I have the following rewrite condition and rule: RewriteCond %{HTTP_HOST} sub\.olddomain\.com RewriteRule (.*) https://sub.newdomain.net$1 [R=301,L] This works except it drops the userinfo part of the URL. Is there a way I can preserve the user info?

    Read the article

  • Presentation Plugin for NetBeans IDE 7.2

    - by Geertjan
    I got some excellent help from Mark Stephens, who is from IDR Solutions, which produces JPedal. Using the LGPL version of JPedal, and code provided by Mark, it's now possible to right-click the node that appears in the Presentation Window, after which, using a file browser (to locate a file on disk) or a URL (a very simple check is done: the URL must start with "http" and end with "pdf"), you can open PDF files as images (thanks to conversion from PDF to images done by JPedal) in NetBeans IDE, typically (I imagine) for presentation purposes. Note that you should consider the plugin to be in an "alpha" state. But, despite that, I've had good results. Try it and use the URL below as a control test (since it works fine for me), which produces the result shown above: http://edu.netbeans.org/contrib/slides/netbeans-platform/presentation-4-actions.pdf However, for some PDFs the plugin doesn't work, and I don't know why yet (trying to figure it out with Mark), resulting in this stack trace:

        java.lang.ArrayIndexOutOfBoundsException: 8
            at org.jpedal.objects.acroforms.formData.SwingData.completeField(Unknown Source)
            at org.jpedal.objects.acroforms.rendering.DefaultAcroRenderer.createField(Unknown Source)
            at org.jpedal.objects.acroforms.rendering.DefaultAcroRenderer.createDisplayComponentsForPage(Unknown Source)
            at org.jpedal.PDFtoImageConvertor.convert(Unknown Source)
            at org.jpedal.PdfDecoder.getPageAsImage(Unknown Source)
            at org.jpedal.PdfDecoder.getPageAsImage(Unknown Source)

    Here's the location of the plugin; install it into NetBeans IDE 7.2. Feedback is very welcome: http://plugins.netbeans.org/plugin/44525

    Read the article
