Search Results

Search found 5435 results on 218 pages for 'ones and zeroes'.

Page 104/218 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • Does purposely linking to an invalid URL and then using 301 affect SEO?

    - by Mike
    On a section of my site, I am currently using .htaccess rewrites to put the ID as part of the URL instead of in the query string, like so:

        RewriteRule ^([a-z_]+)?/?tours/([0-9]+)/(.*) /tours/tour_text.php?lang=$1&id=$2&urlstr=$3 [L]

    For example, if someone goes to /en/tours/12/some-text-here it will rewrite it to /tours/tour_text.php?lang=en&id=12&urlstr=some-text-here. However, I don't want users to be able to put in just any text, so if they type in the wrong some-text-here part it will 301 redirect them to the right page. This works perfectly, but I can see a potential problem arising when localizing the website, so I just wanted to make sure it's not actually a problem. As it is now, if someone goes to /en/tours/12/some-text-here, the anchor to the Spanish version of that page will be /es/tours/12/some-text-here (i.e. only changing the "en" to "es"), and the script will then 301 them to the correct Spanish text (something like /es/tours/12/algun-texto-aqui). The reverse works the same way: the anchor on the Spanish version pointing to the English version would be /en/tours/12/algun-texto-aqui, and they will be forwarded with a 301 back to /en/tours/12/some-text-here. Basically, the anchor changes the language and the 301 fixes the string at the end. So I have two questions:
    1. Does purposely and permanently having invalid URLs on your site that get 301'ed to the correct ones have any effect on SEO? I could make it show the correct URL to begin with, but this is a significant amount of work due to how I am handling the translations, so I would prefer just to 301 them.
    2. Will the invalid URLs that are contained in the links be added to the search engine indexes even if they get 301'ed to another page?
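    The check being described is the usual canonical-slug pattern. A rough, framework-agnostic sketch of what tour_text.php is effectively doing - written in Python purely for illustration, since the actual site is PHP behind the .htaccess rewrite above, and with lookup_slug, redirect, and render standing in for the real database lookup, HTTP redirect, and page rendering:

        # Illustrative only: the canonical-slug 301 pattern described above.
        def handle_tour(lang, tour_id, urlstr, lookup_slug, redirect, render):
            canonical = lookup_slug(lang, tour_id)   # e.g. "some-text-here" or "algun-texto-aqui"
            if urlstr != canonical:
                # Wrong or foreign-language slug: permanently redirect to the
                # canonical URL for this language instead of serving it directly.
                return redirect("/%s/tours/%d/%s" % (lang, tour_id, canonical), status=301)
            return render(lang, tour_id)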

    Read the article

  • How to cover the widest range of computers when publishing?

    - by DevilWithin
    When you plan a game, or even when you have already made one and it's time to publish, you wonder how much of your audience is covered by the game's technology demands. I'm directing this essentially at casual games, as I constantly see people with old laptops who are unable to replace them - laptops with integrated cards whose OpenGL version doesn't even support textures larger than 1024x1024. These people may be avid gamers as well, and a reasonable share of the audience, so it's worth considering giving them the chance to play casual games, since they cannot play any blockbusters. A very "noticeable" example I've seen is Angry Birds. Its gameplay is merely casual (I think nobody disagrees here) and still it uses such high-resolution textures that at least OpenGL 2.0 or thereabouts is needed, which locks out a lot of people. So, the actual question is: what is a good tradeoff for this issue? Would it be better to just sacrifice the texture resolution for everyone, but support more hardware? Would it be better to keep the high quality and just slice the textures into smaller ones, sacrificing performance a little bit? What else? Any ideas about this topic are welcome for discussion.
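    If the slicing route were chosen, the splitting itself is a cheap offline step in the asset pipeline. A rough sketch, assuming Python with the Pillow imaging library (illustrative only, not tied to any engine), that cuts an oversized texture into tiles no larger than the 1024x1024 limit mentioned above:

        # Hedged sketch: split a large texture into <=1024px tiles for old GPUs.
        from PIL import Image

        def slice_texture(path, tile_size=1024):
            image = Image.open(path)
            tiles = []
            for top in range(0, image.height, tile_size):
                for left in range(0, image.width, tile_size):
                    box = (left, top,
                           min(left + tile_size, image.width),
                           min(top + tile_size, image.height))
                    tiles.append(image.crop(box))   # the game then draws the tiles edge to edge
            return tiles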

    Read the article

  • How to manage drawing loop when changing render targets

    - by George Duckett
    I'm managing my game state by having a base GameScreen class with a Draw method. I then have (basically) a stack of GameScreens that I render. I render the bottom one first, as screens above might not completely cover the ones below. I now have a problem where one GameScreen changes render targets while doing its rendering. Anything the previous screens have drawn to the backbuffer is lost (as XNA emulates what happens on the Xbox). I don't want to just set the backbuffer to preserve its contents, as I want this to work on the Xbox as well as the PC. How should I manage this problem? A few ideas I've had:
    1. Render every GameScreen to its own render target, then render them all to the backbuffer.
    2. Create some kind of RenderAction queue, where a game screen (and anything else, I guess) could queue something to be rendered to the back buffer. They'd render whatever they wanted to any render target as normal, but if they wanted to render to the backbuffer they'd stick that in a queue, which would get processed once all render-target rendering was done (sketched below).
    3. Abstract away from render targets and backbuffers, have some way of representing how graphics flow and transform between render targets, and have something manage and work out the correct rendering order (and render targets) given what each rendering step needs as input and what it produces as output.
    I think each of my ideas has pros and cons, and there are probably several other ways of approaching this general problem, so I'm interested in finding out what solutions are out there.
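    A bare-bones illustration of the second idea - deferring back-buffer draws until all render-target work is finished. The sketch is in plain Python rather than XNA/C#, so the names and structure are purely illustrative:

        # Illustrative sketch of a RenderAction queue: screens submit zero-argument
        # callables that draw to the back buffer; the queue is flushed only after
        # every screen has finished its render-target rendering.
        class RenderQueue:
            def __init__(self):
                self._actions = []

            def submit(self, action):
                self._actions.append(action)

            def flush(self):
                for action in self._actions:
                    action()                # each action issues its back-buffer draws now
                self._actions.clear()

        # usage inside the draw loop (pseudocode-ish):
        #   for screen in screens: screen.draw(queue)   # render-target work happens here
        #   set_render_target(None)                     # back to the back buffer
        #   queue.flush()                               # deferred back-buffer draws run last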

    Read the article

  • Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102. I'm looking for a script to import each of these snapshots into Git, the end result being that the latest code is the same as the last snapshot, and the other editions are accessible, numbered as they are. Some other requirements:
    - The latest edition should not be cumulative of the previous snapshots, i.e. files that appeared in older snapshots but which don't appear in later ones (e.g. due to refactoring etc.) should not appear in the latest edition of the code.
    - Meanwhile, there should be continuity between files that do persist between snapshots. I would like Git to know that there are previous editions of these files and not treat them as brand-new files within each edition.
    Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies. I plan to release this work as open source, so version control would be highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), BUT I definitely need a working solution in Git as well as Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering. (I have posted this separately for each tool so that separate camps of folks who have expertise in Git and Subversion will be able to give focused answers on one or the other.) The same but separate question for Subversion: Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?
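    One way to approach this, as a rough sketch: drive git from a small script that replays the snapshots oldest-first, one commit each. The sketch below assumes Python 3.8+, git on the PATH, and the folder layout described above; the two paths are placeholders. Emptying the working tree before copying each snapshot in, then using `git add -A`, means files that disappear between snapshots also disappear from later commits, while files that persist keep a continuous history.

        # Hedged sketch, not a polished tool: one Git commit per dated snapshot folder.
        import os
        import shutil
        import subprocess

        SNAPSHOTS = "/path/to/snapshots"   # contains 20100523/, 20100614/, ... (placeholder)
        REPO = "/path/to/new-repo"         # placeholder

        def git(*args):
            subprocess.run(("git",) + args, cwd=REPO, check=True)

        os.makedirs(REPO, exist_ok=True)
        git("init")

        for snap in sorted(os.listdir(SNAPSHOTS)):
            src = os.path.join(SNAPSHOTS, snap)
            if not os.path.isdir(src):
                continue
            # Empty the working tree (keeping .git) so files dropped between
            # snapshots are recorded as deletions rather than lingering forever.
            for entry in os.listdir(REPO):
                if entry == ".git":
                    continue
                target = os.path.join(REPO, entry)
                shutil.rmtree(target) if os.path.isdir(target) else os.remove(target)
            shutil.copytree(src, REPO, dirs_exist_ok=True)   # needs Python 3.8+
            git("add", "-A")
            git("commit", "-m", "Snapshot " + snap,
                "--date", "%s-%s-%sT12:00:00" % (snap[:4], snap[4:6], snap[6:8]))

    Note that Git does not store renames explicitly; continuity for renamed files comes from Git's content-based rename detection at log/diff time.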

    Read the article

  • Having troubles with LibNoise.XNA and generating tileable maps

    - by Jon
    Following up on my previous post, I found a wonderful port of LibNoise for XNA. I've been working with it for about 8 hours straight and I'm tearing my hair out - I just cannot get maps to tile, and I can't figure out how to do this. Here's my attempt:

        Perlin perlin = new Perlin(1.2, 1.95, 0.56, 12, 2353, QualityMode.Medium);
        RiggedMultifractal rigged = new RiggedMultifractal();
        Add add = new Add(perlin, rigged);

        // Initialize the noise map
        int mapSize = 64;
        this.m_noiseMap = new Noise2D(mapSize, perlin);
        //this.m_noiseMap.GeneratePlanar(0, 1, -1, 1);

        // Generate the textures
        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[0] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        this.m_noiseMap.GeneratePlanar(mapSize, mapSize * 2, mapSize, mapSize * 2);
        this.m_textures[1] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[2] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

    The first and third ones generate fine - they create a Perlin noise map - but the middle one, which I wanted to be a continuation of the first (as per my original post), is just a bunch of static. How exactly do I get this to generate maps that connect to each other by passing in mapSize * tile, using the same seed, settings, etc.?

    Read the article

  • Use a custom domain and point to Tumblr blog

    - by jskye
    My domain mydomain.com is registered with GoDaddy. I wish to host my Tumblr blog on this domain with Nearly Free Speech hosting. My active nameservers at GoDaddy already point to my authoritative ones at Nearly Free Speech, which is working. However, I'm baffled as to how to get the configuration to point to my Tumblr. Preferably I'd like (A) my domain http://mydomain.com to host the blog and have http://www.mydomain.com redirect to http://mydomain.com. If this is too difficult, my next preference is (B) to have http://www.mydomain.com host the blog whilst http://mydomain.com redirects to http://www.mydomain.com. My third preference is (C) to have a sub-domain like http://tumblr.mydomain.com host the blog, and I guess have http://mydomain.com and http://www.mydomain.com both redirect to it. I've tried having two aliases, mydomain.com and www.mydomain.com, pointing to my permanent Nearly Free Speech IP at mydomain.nfshost.com, and then when I try to add: (1) an A record pointing mydomain.com to the IP 66.6.44.4 as per Tumblr's instructions, it tells me I already have the bare domain as an alias, so I can't do that; (2) the A record on the www.mydomain.com alias - I can do this with www.mydomain.com set as an alias or not, but when I tried it with mydomain.com set as the canonical name, the result when visiting either mydomain.com or www.mydomain.com was both of them continually redirecting to each other until an error was thrown. So I was wondering if there is a ninja who could save me some hair-pulling and tell me the correct way to configure A, or else B, or else C.

    Read the article

  • Oracle OpenWorld Session: “Business Driven Development with BPM: Lessons from the Real World”

    - by Ajay Khanna
    One of the key values that BPM promises is "Business Empowerment". The people closest to the processes, who participate in them every day, are the ones who know most about them. These are the people who run day-to-day operations, people who triage customer issues, people who envision improvements and innovations. It is, therefore, imperative that when a company decides to use BPM technology to automate its business processes, business people take the driver's seat. BPM is not an IT-only project. The Oracle BPM Suite has been designed keeping this core tenet of BPM, Business Empowerment, in mind. The result is the business-user-centered design of Process Composer. Process Composer is designed to let business users document their processes, analyze them using simulation, create web forms, specify business rules, and even run them in testing mode using the process player, to see whether the designed process meets their needs. This does not mean that IT has no role in this process. In fact, Oracle BPM Suite has made it very easy for business and IT to collaborate. The same process can be shared among business and IT stakeholders, and each can collaborate to create model-driven, process-based executable applications. A process may need to integrate with multiple systems via various mechanisms, and IT leads the system and data integration effort. IT helps fine-tune the performance of process applications and ensures that the deployment of a process application meets scalability and failover standards. In this session, we saw Harish Gaur and Satya Narayanan from Oracle demonstrate the roles business and IT play in BPM projects and how Oracle BPM Suite enables business and IT collaboration to design and automate process-based applications. They also discussed real-life customer stories. Some key takeaways from this session:
    - There are no IT projects, only business initiatives requiring IT support
    - Identify high-impact processes - critical for better BPM ROI
    - Identify key metrics to measure process performance
    - Align business with the IT layer

    Read the article

  • CMS for coding blog

    - by OrgnlDave
    I've got a server with a LAMP stack and such. I'd like to host a blog-type site (or if there's a free place good for this, that would be cool!) that covers a variety of tutorials, interesting content, etc. There are tons of CMS's out there but if you search for tips on ones that do programming type things well, you get tons of hits about web development. I'd like to know if anyone here has recommendations from actually using a CMS for this type of thing or, short of that, can recommend one - not based on generalities like "Joomla! is great!" I'm looking for the least setup time possible. I'm proficient with CSS and I can design a color scheme, so that's not a big problem. As you can expect, attaching files, pictures, and syntax highlighting are musts (C/C++ ish is good). Ability to group posts, perhaps use tags, etc. would be cool too, but not necessary. As I'm writing this, it almost sounds like it'd be easier to custom-code a small PHP site myself.

    Read the article

  • How do I find out which process is eating up my bandwidth?

    - by Bruce Connor
    I think I'm being the victim of a bug here. Sometimes while I'm working (I still don't know why), my network traffic goes up to 200 KB/s and stays that way, even though I'm not doing anything internet-related. This sometimes happens to me with CPU usage too. When it does, I just run a top command to find out which process is responsible and then kill it. Problem is: I have no way of knowing which process is responsible for my high network usage. Both the resource monitor and the top command only tell me my total network usage; neither of them gives me process-specific network info. Is there another command I can use to find out which process is getting out of hand? I've already tried killing all the obvious ones (firefox, update-manager, pidgin, etc.) with no luck. So far, restarting the machine is the only way I've found of getting rid of the issue. EDIT: (just to be clear) I've found questions here about monitoring total bandwidth usage, but, as I mentioned, that's not what I need. UPDATE: The command iftop gives results that disagree entirely with the information reported by System Monitor. While the latter claims there's high network traffic, the former claims there's barely 1 KB/s. Thanks

    Read the article

  • Google analytics - drop in traffic

    - by user1001421
    Bit of a general question here. We are in the process of converting a number of our clients from older web sites to new ones. The problem, and sorry for being so general here, is that we are seeing a sharp decline in traffic as reported by Google Analytics. It's not a gradual decline; it seems to hit almost as soon as the new site goes live. I've just got a few questions to see if there is something we are doing wrong:
    a) We are using the same Analytics accounts going from the old site to the new. Is this a bad idea?
    b) The actual Analytics code is integrated into the pages using a server-side include. Is this a bad idea?
    c) We structure our sites differently from our old sites. I.e. the old sites would pretty much have all the web pages in the root directory, and hyperlinks would link to the page files, e.g. <a href="somepage.aspx">Link</a>. Our new sites have a directory structure that pretty much reflects the navigation structure, and hyperlinks link to the page's directory instead of the actual page, e.g. <a href="/new-items/shoes/">New shoes</a>. Is this a bad idea?
    I'm really searching for a needle in a haystack here. I would appreciate any help or advice as to why we are getting such a sharp and sudden drop in traffic. Again, sorry that this is such a general question. Thanks in advance.

    Read the article

  • Help with the RT3290 Wireless adapter

    - by Potek
    I know there are a lot of questions about the RT3290 wireless adapter, but I've read many of them and none helped with my problem. I have an HP Pavilion dm1 laptop and I installed Xubuntu 13.10 on it. During the installation, a prompt appeared saying I could connect to the internet via the wireless adapter. I did so, obviously, but it worked for about 5 seconds; then it said the network connection was down and I couldn't connect to my router, even though I could actually see the available network connections. Then (when the installation ended) I rebooted my computer. My network was available! I could connect and browse the internet smoothly. The next day I turned on my laptop and saw that my wireless connection was no longer available, so I plugged the laptop into the internet via an Ethernet cable and started searching the internet for a solution to my problem. I followed this: How do I get a Ralink RT3290 wireless card working? And I was able to connect, BUT my kernel started to panic every time I started Mozilla or any program that connects to the internet. Every tip I found was almost the same as the link above. I tried to do this in many ways. I even reinstalled Xubuntu to do everything with a clean system, but the same thing happened. THEN I installed Linux Mint to check whether it is a general problem or just a Xubuntu problem. Linux Mint responded even worse, because I wasn't even able to use the terminal (I clearly messed something up). I would really, really appreciate any help, because I do want to solve this problem and finally be able to use Xubuntu/Ubuntu. I'm patiently waiting for advice from anyone. If anyone wants some details, just tell me which ones.

    Read the article

  • About online game servers and how to handle data

    - by TreantBG
    So my question isn't about what technology to use or how to do this or that; it's a more general question. I'm currently developing an action third-person shooter with RPG elements - weapon and armor upgrades, and items. Players will be able to create new games or join existing ones. So my question is how to set up the game server that players will play on. I have two ideas in mind:
    1. The player who created the game is the server. All data passes through him, and he sends this data to the central server, which updates the database of players with their XP points, kill/death scores, and so on.
    2. My host machine is the server. The player who created the game just opens a new instance on my host and acts like a client. All players send their input data to the host, and the host updates the game and sends a response back to the clients with any changes, like where the enemy is, and so on.
    If I choose option 1, is there a chance the host could change the game content and manipulate the game results? (I think there is, but I'm not sure.) And if I choose option 2, doesn't that raise the response time and potentially the game lag? Or maybe there is another option?

    Read the article

  • Advice on software infrastructure for a FLOSS bounty site

    - by michaeljt
    I am planning to set up a simple web site where people can offer bounties for work on FLOSS projects. Unfortunately I have no experience with web development (I am a C/C++ developer), so I was hoping someone might be able to suggest out-of-the-box packages (preferably Debian ones) I could use to build the site from. My idea of how the site would work is to keep things as simple as possible. The person proposing a bounty would enter a description with relevant links (particularly to a bug tracker entry on the project the work is to be done for, where the real discussion and work would take place) and other information, and place an initial contribution. Other people would be able to add (donate, not pledge) contributions, but any discussion would take place on the project's bug tracker. I am also planning to run a mailing list rather than a forum (at least initially), so that is not a requirement. PayPal seems to me to be the handiest payment mechanism. So overall what I need is probably a simple interface with PayPal integration and a simple database backend. I hope this is the right place for my question; if not, I would be grateful for pointers to somewhere better. And of course, this is purely about the technical side, though I am more than happy to discuss other aspects of the project elsewhere.

    Read the article

  • Distributed Development Tools -- (Version control and Project Management)

    - by Macy Abbey
    I've recently become responsible for choosing which source control and project management software to use at the company that employs me. Currently it uses Jira (project management) and Subversion (version control). I know there are many other options out there - the ones I know about are all in this article: http://mashable.com/2010/07/14/distributed-developer-teams/ . I'm leaning towards recommending they just stay with what they have, as it seems workable and any change would have to be worth the cost of switching to, say, GitHub/Basecamp or some other solution. Some details on the team:
    - It's a distributed development shop. Meetings of the whole team in one room are rare.
    - It's currently a very small development team (three developers).
    - The project management software is used by the developers and a product manager or two.
    What are your experiences with version control and project management web applications? Are there any you would recommend that you think are worth the switching cost - the time to learn new services and implement the change? Edit: After educating myself further on the options, it appears DVCSs offer powerful benefits that may be worth investing in now, as opposed to later in the company's lifetime when the switching cost is higher: I'm a Subversion geek, why I should consider or not consider Mercurial or Git or any other DVCS?

    Read the article

  • Refactoring and Open / Closed principle

    - by Giorgio
    I have recently been reading a web site about clean code development (I do not link it here because it is not in English). One of the principles advocated by this site is the Open/Closed Principle: each software component should be open for extension and closed for modification. E.g., when we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that do not influence the existing ones). The existing functionality and implementation should not be changed. I normally apply this principle by defining an interface I and a corresponding implementation class A. When class A has become stable (implemented and tested), I normally do not modify it too much (possibly, not at all), i.e.:
    - If new requirements arrive (e.g. performance, or a totally new implementation of the interface) that require big changes to the code, I write a new implementation B, and keep using A as long as B is not mature. When B is mature, all that is needed is to change how I is instantiated.
    - If the new requirements suggest a change to the interface as well, I define a new interface I' and a new implementation A'. So I and A are frozen and remain the implementation for the production system as long as I' and A' are not stable enough to replace them.
    So, in view of these observations, I was a bit surprised that the web page then suggested the use of complex refactorings, "... because it is not possible to write code directly in its final form." Isn't there a contradiction / conflict between enforcing the Open/Closed Principle and suggesting the use of complex refactorings as a best practice? Or is the idea here that one can use complex refactorings during the development of a class A, but once that class has been tested successfully it should be frozen?
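    To make the I/A/B arrangement above concrete, here is a minimal sketch in Python (the Sorter example is illustrative only; the article names no particular domain): the interface and the tested implementation stay untouched, the new requirement lands in a new class, and only the factory that instantiates the interface changes.

        # Illustrative sketch of the workflow described above, not code from the article.
        from abc import ABC, abstractmethod

        class Sorter(ABC):                    # plays the role of the interface I
            @abstractmethod
            def sort(self, items): ...

        class InsertionSorter(Sorter):        # the stable, tested implementation A (frozen)
            def sort(self, items):
                result = []
                for item in items:
                    pos = 0
                    while pos < len(result) and result[pos] < item:
                        pos += 1
                    result.insert(pos, item)
                return result

        class FastSorter(Sorter):             # the new implementation B for a performance requirement
            def sort(self, items):
                return sorted(items)          # stand-in for the faster algorithm

        def make_sorter() -> Sorter:
            return FastSorter()               # the only line that changes once B is mature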

    Read the article

  • Update Google Sitemap for Mobile

    - by dimo414
    I have a series of utilities to generate Google sitemaps for my whole site. These files are massive and slow to build. We want to start telling Google these pages are mobile-crawlable too, by adding them to mobile sitemaps, but the documentation is unclear about whether I need to specify physically different files for my mobile URLs than for my normal ones. If this is my current sitemap:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
          </url>
        </urlset>

    can I simply change it to:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
            <mobile:mobile/>
          </url>
        </urlset>

    or do I need to create new files with the additional markup, alongside my existing files?
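    Since the post mentions utilities that generate these files, here is a rough sketch of emitting the mobile-annotated variant with Python's standard xml.etree.ElementTree; it is purely illustrative, and the URL and output file name are placeholders:

        # Hedged sketch: write a sitemap whose <url> entries carry <mobile:mobile/>.
        import xml.etree.ElementTree as ET

        SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
        MOBILE_NS = "http://www.google.com/schemas/sitemap-mobile/1.0"

        ET.register_namespace("", SITEMAP_NS)
        ET.register_namespace("mobile", MOBILE_NS)

        def build_mobile_sitemap(urls, path):
            urlset = ET.Element("{%s}urlset" % SITEMAP_NS)
            for u in urls:
                url = ET.SubElement(urlset, "{%s}url" % SITEMAP_NS)
                ET.SubElement(url, "{%s}loc" % SITEMAP_NS).text = u
                ET.SubElement(url, "{%s}mobile" % MOBILE_NS)   # serialized as <mobile:mobile/>
            ET.ElementTree(urlset).write(path, xml_declaration=True, encoding="UTF-8")

        build_mobile_sitemap(["http://mobile.example.com/article100.html"], "sitemap-mobile.xml")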

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This reports the row count and the reserved, data, index, and unused space for that table. Of course, this only shows one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Starks' CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists Space Used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)
        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF(@@FETCH_STATUS<>0) BREAK;
            INSERT #TempTable EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO

    Read the article

  • What functionality should a (basic) mock framework have?

    - by user1175327
    If I were to start writing a simple mock framework, what are the things that a basic mock framework MUST have? Obviously mocking any object, but what about assertions and perhaps other things? When I think about how I would write my own mock framework, I realise how much I really know (or don't know) and what I would trip up on. So this is more for educational purposes. Of course I did research, and this is what I've come up with that a minimal mocking framework should be able to do. Now my question in this whole thing is: am I missing some important details in my ideas?
    Mocking
    - Mocking a class: it should be able to mock any class. The mock should preserve the properties and their original values as they were set in the original class. All method implementations are empty.
    - Calls to methods of the mock: the mock framework must be able to define what a mocked method must return, e.g.: $MockObj->CallTo('SomeMethod')->Returns('some value');
    Assertions
    To my understanding, mocking frameworks also have a set of assertions. These are the ones I think are most important (taken from SimpleTest):
    - expect($method, $args) - Arguments must match if called
    - expectAt($timing, $method, $args) - Arguments must match when called on the $timing'th time
    - expectCallCount($method, $count) - The method must be called exactly this many times
    - expectMaximumCallCount($method, $count) - Call this method no more than $count times
    - expectMinimumCallCount($method, $count) - Must be called at least $count times
    - expectNever($method) - Must never be called
    - expectOnce($method, $args) - Must be called once, and with the expected arguments if supplied
    - expectAtLeastOnce($method, $args) - Must be called at least once, and always with any expected arguments
    And that's basically, as far as I understand, what a mock framework should be able to do. But is this really everything? Because it currently doesn't seem like a big deal to build something like this. But that's also the reason why I have the feeling that I'm missing some important details about such a framework. So is my understanding right about a mock framework? Or am I missing a lot of details?
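    For a sense of scale, the recording-and-stubbing core really is small. A toy sketch in Python (the question's examples are PHP/SimpleTest-style, so this is a loose translation, not a real framework) covering stubbed returns, call recording, and rough equivalents of a couple of the assertions listed above:

        # Toy sketch only: records calls, returns stubbed values, checks expectations.
        class Mock:
            def __init__(self):
                self._returns = {}          # method name -> stubbed return value
                self._calls = {}            # method name -> list of (args, kwargs)

            def returns(self, method, value):
                self._returns[method] = value
                return self

            def __getattr__(self, method):
                def recorded(*args, **kwargs):
                    self._calls.setdefault(method, []).append((args, kwargs))
                    return self._returns.get(method)
                return recorded

            # rough equivalents of expectCallCount / expectOnce / expectNever
            def call_count(self, method):
                return len(self._calls.get(method, []))

            def assert_called_once(self, method, *args, **kwargs):
                assert self._calls.get(method, []) == [(args, kwargs)], \
                    "expected exactly one matching call to " + method

            def assert_never(self, method):
                assert method not in self._calls, "expected no calls to " + method

        # usage
        mock = Mock().returns("some_method", "some value")
        assert mock.some_method(1, 2) == "some value"
        mock.assert_called_once("some_method", 1, 2)
        mock.assert_never("other_method")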

    Read the article

  • Windows Embedded Compact 7

    - by Valter Minute
    This will be the official name of the new release of Windows CE. Windows Embedded Compact 7 is available as a public CTP, and it already supports a wide range of CPUs and both the Device Emulator and Virtual PC emulated environments. So I'll have to learn a new (and longer) name for my favorite OS… but I (and all my two readers!) will be able to test it as soon as the download from the Connect web site completes (I'm sorry for my readers, but you'll have to download it by yourselves). Here's a link for the download (it's free, but you'll have to register on Connect with a valid LiveId): https://connect.microsoft.com/windowsembeddedce Remember that this is still a beta (or "Community Technology Preview" if you speak marketing language), so it's better not to install it on your main development PC (or, at least, back up everything before installation), and the features and performance you'll get from this beta may not be the same as those of the final release of the OS. You can discover the new features of Windows Embedded Compact on the new "official" webpage on the Microsoft website: http://www.microsoft.com/windowsembedded/en-us/products/windowsce/compact7.mspx or on Olivier's blog: http://blogs.msdn.com/b/obloch/archive/2010/06/01/windows-embedded-compact-7-announced-and-public-ctp-available.aspx I hope to be able to post some interesting content about Windows Embedded Compact 7 soon (and maybe shorten its name to CE7 in my blog posts, once I'm sure that neither of my readers is working for Microsoft's marketing department…). Technorati Tags: "Windows Embedded Compact 7"

    Read the article

  • 12.10 upgrade broke brightness keys [closed]

    - by Chris Morgan
    I have been running Ubuntu (64-bit) on my HP 6710b laptop (Core 2 Duo with integrated graphics) for several years, and the backlight brightness keys have always worked. Since I upgraded to Ubuntu 12.10 earlier today, those keys do not work any more. The secondary function keys:
    - Fn+F3: sleep; still works (and considerably faster than ever before!)
    - Fn+F8: battery info; still works
    - Fn+F9: reduce brightness; stopped working in 12.10
    - Fn+F10: increase brightness; stopped working in 12.10
    It may also be worthwhile mentioning that X does not appear to be receiving the brightness events at all, or at least is not passing them on. (This I detected with a key logger I wrote for a uni project, which uses X's Record extension; it is informed of the sleep and battery-info keystrokes, but doesn't receive the brightness ones at all.) In the meantime, I know that I can use the Brightness & Lock settings screen to alter the brightness. (Wow! I can suddenly make my backlight darker than I could before - I can go right down to turning the backlight off, something I couldn't do before... but this model has a fairly dim screen, so I don't expect to use that much, if ever.) How can I get the brightness keys working again? This question is probably strongly related to I can't control my Brightness in HP Compaq 6710s.

    Read the article

  • Can't install Skype on Ubuntu 12.04 64 bit

    - by Gabriel Alvim
    I've tried many different ways. I downloaded the file from the Skype website, which returned this error: "Cannot install ia-32-libs". I followed the instructions at https://help.ubuntu.com/community/Skype and here is what I got:

        W: Failed to fetch http://archive.canonical.com/ubuntu/dists/precise/partner/Packages/binary-amd64/Packages 404 Not Found [IP: 91.189.92.191 80]
        W: Failed to fetch http://archive.canonical.com/ubuntu/dists/precise/partner/Packages/binary-i386/Packages 404 Not Found [IP: 91.189.92.191 80]
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I even tried this command line:

        sudo apt-get install lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1 skype

    And this is what I got:

        Error: need a repository as argument
        pandora@ubuntu:~$ sudo apt-get install lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1 skype
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-multiarch
         skype : Depends: skype-bin
        E: Unable to correct problems, you have held broken packages.

    I just don't know what to do anymore; if I can't use Skype, I might as well not use Ubuntu at all. Please, someone help.

    Read the article

  • Developing wheel reinventing tendencies into a skill as opposed to reluctantly learning wheel-finding skills? [duplicate]

    - by Korey Hinton
    This question already has an answer here: Is reinventing the wheel really all that bad? (20 answers)
    I am more of a high-level wheel reinventor. I definitely prefer to make use of existing API features built into a language and popular third-party frameworks that I know can solve the problem, however when I have a particular problem that I feel capable of solving within a reasonable time I am very reluctant to find someone else's solution. Here are a few reasons why I reinvent:
    - It takes time to learn a new API
    - API restrictions might exist that I don't know about
    - Avoiding re-work of unfamiliar code
    I am conflicted between doing what I know and shifting to a new technique I don't feel comfortable with. On one hand I feel like following my instincts and getting really good at solving problems, especially ones that I would never challenge myself with if all I did was try to find answers. And on the other hand I feel like I might be missing out on important skills like saving time by finding the right framework and expanding my knowledge by learning how to use a new framework. I guess my question comes down to this: My current attitude is to stick to the built-in API and APIs I know well* and to not spend my time searching github for a solution to a problem I know I can solve myself within a reasonable amount of time. Is that a reasonable balance for a successful programmer?
    *Obviously I will still look around for new frameworks that save time and solve/simplify difficult problems.

    Read the article

  • HCM: North America: Year End Knowledge Content References

    - by CaroleB
    As we all know, the next couple of months will be busy ones for the Payroll and IT departments as they prepare for Year End. As a means of assisting you to find documented knowledge on North American (NA) Year End, the following reference guide has been put together:
    General Knowledge:
    - Doc ID 404478.1 - Americas (US, CA, MX) HCM High Priority Alert
    - Doc ID 1577601.1 - North American Year End 2013 / 2014 Year Begin Patch Information and Useful Links. Monitor this note as it will be updated as new information becomes available.
    NA Year End Processing:
    - Document 255466.1 - End of Year Processing Using Oracle HRMS (US)
    - Document 260344.1 - End Of Year Processing Using Oracle HRMS (Canada)
    - Document 395622.1 - End Of Year Processing Using Oracle HRMS (Mexican)
    Patching:
    - Document 216109.1 - Oracle Human Resources (HRMS) Payroll North America Annual Patching Schedule
    - Document 1160507.1 - Oracle E-Business Suite - Consolidated HRMS Mandatory Patch List
    - Document 1144633.1 - US Year End Patch Flow Advisor: E-Business Suite (EBS) Human Capital Management (HCM) for US Legislation patching
    2013 YE Phase I Readmes:
    US
    - Document 1584795.1 - Release 11i - 2013 US Payroll Year End Phase 1 Readme
    - Document 1584796.1 - Release 12.0 - 2013 US Payroll Year End Phase 1 Readme
    - Document 1584797.1 - Release 12.1 - 2013 US Payroll Year End Phase 1 Readme
    CA
    - Document 1585365.1 - 2013 Canadian Payroll Year End Phase 1 Readme Release 11i
    - Document 1585366.1 - 2013 Canadian Payroll Year End Phase 1 Readme Release 12.0
    - Document 1585367.1 - 2013 Canadian Payroll Year End Phase 1 Readme Release 12.1
    Known Issues / How To:
    - Document 1527958.2 - Information Center: Oracle HRMS (US) (All Application Versions). Look specifically at the US Year End tab for information on: Year End Pre-Processor, 1099R, Federal, State, and Local Magnetic Media, W-2 Paper Reports, W-2 PDF, W-2 Register.
    Additional Resources:
    - Webcast: Document 1455851.1 - Advisor Webcasts for Oracle E-Business Suite - Human Capital Management (HCM)
    - Document 1592483.1 - Webcast: EBS North American Payroll Year End Process Flow, November 20, 2013 at 3:30 pm ET, 2:30 pm CT, 1:30 pm MT, 12:30 pm PT
    - Communities: Payroll - EBS HCM - EBS Community; E-Business Patching Community

    Read the article

  • How do I create my own programming language and a compiler for it

    - by Dave
    I have a thorough background in programming and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, Javascript, Assembly and so on. I can't understand how people create programming languages and devise compilers for them. I also couldn't understand how people create OSes like Windows, Mac, UNIX, DOS and so on. The other thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of this stuff, and I am 15 years old. I have always wanted to be a computer scientist, someone like Babbage, Turing, Shannon, or Dennis Ritchie. I have already read Aho's Compiler Design and Tanenbaum's OS concepts book, and they only discuss concepts and code at a high level. They don't go into the details and nuances of how to devise a compiler or operating system. I want a concrete understanding so that I can create one myself, not just an understanding of what a thread, semaphore, process, or parsing is. I asked my brother about all this. He is an SB student in EECS at MIT and hasn't got a clue how to actually create all this stuff in the real world. All he knows is an understanding of compiler design and OS concepts like the ones that you guys have mentioned (i.e. threads, synchronisation, concurrency, memory management, lexical analysis, intermediate code generation, and so on).

    Read the article

  • Data recovery on Ubuntu 11.10?! (after crashing with Seagate 320GB)

    - by Sam
    Just installed 11.10 last week and decided to transfer my iTunes music (from the Windows dual boot) to my Seagate 320GB drive. I left it plugged in, restarted, clicked Ubuntu at the boot screen, and then it froze after a few lines of boot text! I think I got to 3.7086 or something before I pressed CTRL+ALT+DEL and the system restarted after another few lines. I am completely new to Ubuntu, so after Googling, I made a live CD with 10.04, which I've heard is the most stable release, and I'm typing this from it now. However, when I go to mount my partition, only the Windows Vista partition (308GB) is there! It has all my Windows files, but my Ubuntu 11.10 ones are nowhere to be found. I need to restore the pictures I transferred from my camera using Shotwell the other day... any help is appreciated! P.S. 11.10 never crashed on me during my trial week, so I'm guessing it's the Seagate hard drive's fault. However, now I'm running on 10.04 and it works fine.

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >