Search Results

Search found 1696 results on 68 pages for 'textbook mistake'.


  • What is the best way for an experienced developer to work on a WordPress blog

    - by nanothief
    I'm beginning work on my first WordPress blog, but I've noticed most tutorials just have you make modifications (such as theme changes or installing plugins) on the production site. This worries me for a few reasons:

    - No backups.
    - No version control.
    - If you make a mistake, your production site is affected.
    - Developing remotely is slower than local development, especially when tweaking CSS files.

    I understand why WordPress works like this: it allows people with no development experience to manage their WordPress installation (or the one provided by their service provider), and it allows you to work on the installation without having SSH access to the server. However, as I am comfortable working with tools like git and ssh, and am using a virtual server for the blog, this isn't very important to me.

    So I was wondering what techniques experienced developers use when working on a WordPress blog. For example:

    - Do you develop locally, then push the changes to the live site? How do you do this?
    - How do you manage database changes and backups?
    - What do you store under version control (if anything)?
    - If a plugin changes the database, do you somehow track the changes it makes in version control, so you can roll them back if you need to?

    Or maybe I'm just overcomplicating everything, and working on the production site isn't as risky as I think it is. I would appreciate any answers either way.
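
    A hedged sketch of one such workflow, assuming a git remote on the server and SSH access (every host name, path, and credential below is a placeholder, not something from the post):

        # Back up the live database before deploying (names are hypothetical).
        ssh deploy@blog.example.com \
          "mysqldump -u wpuser -p wordpress | gzip > ~/backups/wp-$(date +%F).sql.gz"

        # Theme and plugin code live in version control locally.
        git add wp-content/themes/my-theme
        git commit -m "Tweak stylesheet"

        # Push to a bare repository on the server; a post-receive hook there
        # (e.g. GIT_WORK_TREE=/var/www/blog git checkout -f) updates the docroot.
        git push live master

    Database state is the part git does not cover, which is why the dump comes first; schema changes made by a plugin can really only be rolled back by restoring such a dump.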

    Read the article

  • Is the Facebook Like JavaScript related to an increase in "Time spent downloading a page" in GWT?

    - by donaldthe
    Hi, I installed the Facebook Like button (JavaScript version) on my website on December 15th. Take a look at this report from Google Webmaster Central, showing crawl stats for Googlebot activity over the last 90 days. The crawl stats are from Googlebot, which as far as I know doesn't execute JavaScript. Could the Facebook Like JavaScript code (the XFBML version) be related to the large spike in time spent downloading a page? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to drop by half somewhere in December; it may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem.

    Here is the Facebook code I am using to create the Like button. It is right after the opening body tag:

        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    And this creates the Like box:

        <fb:like show_faces="false"></fb:like>

    If the JavaScript can't be the problem, any ideas on where to start looking would be appreciated.
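
    If you want to rule the script out experimentally, Facebook also offered an iframe version of the Like button that involves no JavaScript at all. A sketch follows; the plugin URL and its parameters are reconstructed from memory, not from the post, so verify them against Facebook's documentation before relying on them:

        <iframe src="http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fexample.com%2F&show_faces=false&width=450&height=35"
                scrolling="no" frameborder="0"
                style="border:none; width:450px; height:35px"></iframe>

    That said, since Googlebot doesn't execute JavaScript (as noted above) and the script is already loaded asynchronously, swapping implementations is more of a diagnostic step than a likely fix.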

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
            # Enable mod_rewrite
            RewriteEngine On
            # Specify the folder in which the application resides.
            # Use / if the application is in the root.
            RewriteBase /tshirtshop
            #RewriteBase /
            # Rewrite to correct domain to avoid canonicalization problems
            # RewriteCond %{HTTP_HOST} !^www\.example\.com
            # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
            # Rewrite URLs ending in /index.php or /index.html to /
            RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
            RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
            # Rewrite category pages
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
            # Rewrite department pages
            RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
            RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
            # Rewrite subpages of the home page
            RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
            # Rewrite product details pages
            RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site is working on localhost, but as if no .htaccess rule were specified: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. The output of sudo apache2ctl -M is:

        Loaded Modules:
          core_module (static)
          log_config_module (static)
          logio_module (static)
          mpm_prefork_module (static)
          http_module (static)
          so_module (static)
          alias_module (shared)
          auth_basic_module (shared)
          authn_file_module (shared)
          authz_default_module (shared)
          authz_groupfile_module (shared)
          authz_host_module (shared)
          authz_user_module (shared)
          autoindex_module (shared)
          cgi_module (shared)
          deflate_module (shared)
          dir_module (shared)
          env_module (shared)
          mime_module (shared)
          negotiation_module (shared)
          php5_module (shared)
          reqtimeout_module (shared)
          rewrite_module (shared)
          setenvif_module (shared)
          status_module (shared)
        Syntax OK

    What is the mistake, if anyone can point it out in the above configuration, or what else do I need to check?
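
    One way to see what mod_rewrite actually does with a request like /tshirtshop/nature-d2 is rewrite logging. A sketch for Apache 2.2, which the php5_module above suggests this is (these two directives were removed in Apache 2.4, where LogLevel trace levels replace them); they go inside the <VirtualHost> block, followed by an Apache reload:

        RewriteLog ${APACHE_LOG_DIR}/rewrite.log
        RewriteLogLevel 3

    If nothing shows up in rewrite.log for the request, the .htaccess file is not being read at all, which points back at AllowOverride taking effect; if entries do show up, the log records which pattern each rule tried and why it did or did not match.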

    Read the article

  • Is it common to only pay developers for the time they said a project would take?

    - by BAM
    I work at a small startup (<10 people), and I was recently assigned (along with one other developer) to a relatively small project. The project involved moving an existing iOS app to Android. The client told us they had built the app for iOS in 300 man-hours. Not knowing at the time that this figure was completely false, we naively and optimistically assumed that if they could build the app from scratch in that amount of time, we could easily "port" it in a similar amount of time. Therefore, we drafted up a fixed-price contract based on 350 man-hours, with a 5 week deadline. (We are well aware now of how big of a mistake this was... Never let the client tell you how long it's going to take!)

    Anyway, by week 4 we had already surpassed our 350 hours, and we estimated that there were at least 2 more weeks left on the project. We were told to continue working, but that the company could not afford to pay out on overdue projects anymore. I thought this just meant "be more careful about estimates in the future". However, a few weeks later, the company president informed us that we would not be getting paid for any time past 350 man-hours. We argued over the issue for almost an hour. He claimed, however, that this is standard practice for many organizations, and that I was unreasonable for making a big deal out of it.

    So is this really a common thing, or am I justified in being upset about it? Thanks in advance for any advice!

    Read the article

  • How to convince a client to switch to a framework *now*; also examples of great, large-scale php applications.

    - by cbrandolino
    Hi everybody. I'm about to start working on a very ambitious project that, in my opinion, has great potential in its basic concept and its implementation ideas (implementation as in how the ideas will be implemented, not as in programming). The state of the code right now is unfortunately subpar: it's vanilla PHP, with no framework and no separation between application and visualization logic. It's been done mostly by amateur students (I know great amateur/student programmers, don't get me wrong; that was not the case here).

    The clients are really great, and they know the system won't scale and needs a redesign. The problem is, they would like to launch a beta ASAP and only then think about rebuilding. Since just the basic functionality is present now, I suggested it would be a great idea if we (we're a three-person shop, all very proficient) ported that code to some framework (we like CodeIgniter) before launching. We could reasonably do that in under 10 days. Problem is, they don't think PHP would be a valid long-term solution anyway, so they would prefer to just let it be, fix the bugs for now (there are quite a few), and then directly switch to some Ruby- or Python-based system.

    Porting to CI now would make future improvements incredibly easier and the current code more secure, and would make changing the style (still being discussed with the designers) a breeze. Reminder: there are database calls in template files right now. The biggest obstacle is the lack of trust in PHP as a valid, scalable technology. So I need some examples of great PHP applications (apart from Facebook) and some suggestions on how to convince them to port soon. Again, they're great people; it's not that they like Ruby because it's so hot right now. They just don't trust PHP, since us cool programmers like bashing it, I suppose. But I'm sure going on like this for even one more day would be a mistake. Also, we have some weight in the decision process.
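
    For what it's worth, a minimal sketch of what the port buys, using invented names rather than anything from the actual project: the database call moves out of the template into a CodeIgniter-style model, and the view only renders:

        <?php
        // Today (roughly): query and markup tangled in one template file, e.g.
        //   <ul><?php $r = mysql_query("SELECT name FROM products"); ... ?></ul>

        // After the port: the controller asks the model, the view just renders.
        class Products extends CI_Controller {
            public function index() {
                $this->load->model('product_model');
                $data['products'] = $this->product_model->get_all();
                $this->load->view('products_list', $data);
            }
        }

        class Product_model extends CI_Model {
            public function get_all() {
                // The query builder escapes inputs, one of the security wins.
                return $this->db->get('products')->result();
            }
        }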

    Read the article

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and I've found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming-error exceptions; he should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails. In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I now create my own exception classes for all failure exceptions, and document them in the methods that throw them. (I would make them checked exceptions in Java.)

    Now I have a few questions:

    - Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I now document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough?
    - Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this?
    - A similar case is with reference-type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
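
    A minimal sketch of the distinction in C# (the types, paths, and names are invented for illustration, not taken from the post):

        using System;
        using System.IO;

        // Failure exception: a documented, catchable type of our own.
        public class UserStoreUnavailableException : Exception
        {
            public UserStoreUnavailableException(string user, Exception inner)
                : base("User store unavailable while loading " + user, inner) { }
        }

        public class UserStore
        {
            // Precondition violations throw ArgumentNullException (programming
            // error, not meant to be caught); environmental problems get wrapped
            // in UserStoreUnavailableException (failure, meant to be caught).
            public string Load(string userName)
            {
                if (userName == null)
                    throw new ArgumentNullException("userName"); // precondition

                try
                {
                    return File.ReadAllText("/var/users/" + userName + ".txt");
                }
                catch (IOException e)
                {
                    throw new UserStoreUnavailableException(userName, e); // failure
                }
            }
        }

    The ArgumentNullException is documented only implicitly, as the non-null precondition; the failure exception is an explicit part of the method's contract that callers are expected to catch.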

    Read the article

  • Interaction between two Clouds

    - by user7969
    I have set up Cloud-A with 1 [CLC+CC] and 2 [NC] computers, and I have another Cloud-B with the same configuration (using the Ubuntu Enterprise Cloud). Both of them work fine individually, on the same LAN. Now if I want to add the NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for the interoperability stuff...

    Could you please explain what happens exactly when we ask for an instance: does the direct interaction happen between the client and the NC, or does it go through the CLC and CC?

    What I want to say is: say there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background? Please reply... I am sorry if I'm making a mistake anywhere... Thanks in advance :)

    Read the article

  • Google search question, front page not showing

    - by user5746
    I know this is probably a dumb question, but I hope someone can give me some insight. I was ranked on Google's first page of search results for "funny st patricks day shirts", but I was third from the bottom and not familiar enough with SEO, so I signed up for "Attracta" to rank higher. Big mistake. Since using Attracta, I've lost the first page and I'm now on the fourth page in that search.

    What I noticed is that Google is now just showing a sub-page or side page (a link from my front page, to a page which has only a few designs in it). This is not where I would want customers to land first... but my front page is not showing in that search anymore. Obviously, the title of this side page is not geared toward that search result, so I know that's why I have the PR drop. Why is my front page not ranking over that page, though? Why is it apparently gone from that search, or so far back no one will ever find it?

    I need to know how to fix this quickly, if anyone has any advice at all for me. It's the busiest season for my website, and the people who were stealing design ideas from me are all ranked higher than my site now. (I can prove this, lol.) So, I'm very frustrated by that. I would be very grateful for any advice at all as to what I can do to fix this. THANKS in advance for any advice you can offer. Catelyn

    Read the article

  • How do I rescue files from the encrypted home folder via live USB stick?

    - by Alexia
    I know, this has been asked and answered all over the internet already. However, I'm starting to feel stupid, since the information out there isn't helping me. Just this morning, I wanted to install the newest update to 13.10. After the download, when it came to the actual installing, the install program froze and didn't do anything for hours. At that time, I was still logged in; the computer was working and everything was accessible to me. However, I made the mistake of not immediately making safety copies of everything. Instead, I just rebooted. Long story short: my computer even fails to reset to a previous version via GRUB. But I am able to boot from a USB stick and, after starting Nautilus, I see my home folder on the HD. I would now like to copy its contents onto an external hard disk.

    - Problem 1: I have no rights to access the folder like that.
    - Problem 2: It is encrypted.
    - Problem 3: I don't know how to give myself the rights to access the folder, nor do I know how to decrypt it.

    I assume it might help that I still know these things:

    - my old login name
    - my old login phrase
    - a 32-character string of hexadecimal digits that I copied to my list of passwords as "Ubuntu Encryption Code". I copied it digitally right after installing Ubuntu the first time and encrypting the home folder, so there won't be any typos. I am sure of that.

    The solutions that I saw so far tell me that I need the "encryption phrase". But when I follow the instructions and use this phrase from my list, I only get messages of denial. Can anyone help me through this particular problem, please?
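
    A sketch of the usual recovery route with ecryptfs-utils from a live session; the mount point and user name below are placeholders, and the details vary, so treat this as a starting point rather than a recipe:

        # Mount the old system disk read-only first, e.g. at /media/disk, then:
        sudo ecryptfs-recover-private /media/disk/home/.ecryptfs/alexia/.Private

        # The tool asks for the LOGIN passphrase to unwrap the real mount
        # passphrase; if that fails, it can prompt for the mount passphrase
        # directly, which is where the 32-character hex string belongs.

        # On success the decrypted files appear under a /tmp/ecryptfs.* mount:
        sudo cp -a /tmp/ecryptfs.*/. /media/external-drive/rescued-home/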

    Read the article

  • Are null references really a bad thing?

    - by Tim Goodman
    I've heard it said that the inclusion of null references in programming languages is the "billion dollar mistake". But why? Sure, they can cause NullReferenceExceptions, but so what? Any element of the language can be a source of errors if used improperly. And what's the alternative? I suppose instead of saying this:

        Customer c = Customer.GetByLastName("Goodman"); // returns null if not found
        if (c != null)
        {
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman. How lame!");
        }

    You could say this:

        if (Customer.ExistsWithLastName("Goodman"))
        {
            Customer c = Customer.GetByLastName("Goodman"); // throws error if not found
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman. How lame!");
        }

    But how is that better? Either way, if you forget to check that the customer exists, you get an exception. I suppose that a CustomerNotFoundException is a bit easier to debug than a NullReferenceException, by virtue of being more descriptive. Is that all there is to it?
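
    For completeness, one alternative the post doesn't mention is the Try-pattern, which bundles the existence check and the lookup so the "not found" case can't be forgotten. A sketch, assuming a hypothetical TryGetByLastName on the same invented Customer API:

        Customer c;
        if (Customer.TryGetByLastName("Goodman", out c)) // hypothetical method
        {
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman. How lame!");
        }

    Unlike the ExistsWithLastName version above, this also queries only once and has no window for the customer to disappear between the check and the fetch.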

    Read the article

  • Windows Defender Update KB915597 (Definition 1.135.415.0)? Killed My Live Discs

    - by user88311
    Here's my problem. For those willing to read for about 2 minutes, here's the entire story: http://answers.microsoft.com/en-us/windows/forum/windows_vista-windows_update/bsod-after-windows-defender-update-kb915597/a4b5fca3-0274-47b4-97c4-61b34c4c4599. For those who want the short version, here's what happened: after Windows Update automatically updated Windows Defender to KB915597, my computer started getting BSODs on shutdown and startup, and started experiencing problems with the USB ports. So I decided to go to the Microsoft Answers site for help (I know, that was probably my first mistake), I followed their advice, and they turned my computer into a large paperweight.

    Luckily I make physical backups of my C drive every few months and have one from back in July, so I figured I'd boot up a Ubuntu live disc, copy all my files from the past 2 months to an external drive, and just copy the backup back to the C drive. That's where I ran into this problem. When I put in either a Ubuntu or Kubuntu disc, everything goes well until it finishes the loading bar; then, when the OS would presumably start up, the computer resets. I've tried Ubuntu, Kubuntu and GParted, and only GParted is able to get to the point where it starts up, but even then, when I try to access the internet from it, the computer resets, and when I tried to copy the entire C drive partition to a blank external drive, I wasn't able to.

    So I figured maybe the C drive had something to do with it, and I unplugged it, leaving my computer as just a 2.8 GHz processor and 2 GB of RAM, which should have had no problem starting a live disc, but the problem continued. After doing some googling around, I've found that whenever Windows gets an update with the title KB915597, it's pretty much the kill switch for Windows. I've tried contacting Microsoft tech support and even managed to reach a software engineer directly, but as soon as I mention KB915597 they all just blow me off. I hope anybody who reads this has some idea how to fix this; I'm going to attempt to install Ubuntu or Kubuntu to an external drive using the same computer and see what happens now.

    Read the article

  • I just recursively chmod'd everything under / to 750. Any tips?

    - by Ouairz
    I won't be the first and I won't be the last, I suppose. While playing around with the find command, I made a whoops, and it would appear that instead of changing the permissions of the ~/web directory to 750, it changed the permissions of the entire filesystem (/) to 750. I'm not certain, but any attempt to investigate is thwarted by "Permission denied" messages. For everything. This was the offending command:

        sudo find ~/web . type d -exec chmod 750 {}

    If I'm not mistaken, the Ubuntu team disabled root logins as a safety precaution, so I'm out of ideas. I'm (obviously) a total newbie when it comes to file permissions, so I was wondering if anyone had some good, or even some bad, advice to share. I've mentally prepped myself for losing everything on the computer, which is only of mild consequence since I have backups, but I did do a bit of work on this box over the week and it would be a shame to lose it all to a boneheaded mistake. If you are reading this message, ask yourself: have you backed up any of your work recently? Thanks in advance for any insights. Feel free to scold me for using sudo carelessly.
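
    For contrast, a sketch of what the command was presumably meant to be (assuming the intent described above; note the hyphen on -type, the -exec terminator, and no extra starting path):

        sudo find ~/web -type d -exec chmod 750 {} \;

    The stray "." argument makes find walk the shell's current working directory as a second starting point, so if the command was run from / (or lost characters when it was quoted), a chmod aimed at ~/web could sweep up the whole filesystem.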

    Read the article

  • Are Java's public fields just a tragic historical design flaw at this point?

    - by Avi Flax
    It seems to be Java orthodoxy at this point that one should basically never use public fields for object state. (I don't necessarily agree, but that's not relevant to my question.) Given that, would it be right to say that, from where we are today, it's clear that Java's public fields were a mistake/flaw of the language design? Or is there a rational argument that they're a useful and important part of the language, even today? Thanks!

    Update: I know about the more elegant approaches, such as in C#, Python, Groovy, etc. I'm not directly looking for those examples. I'm really just wondering if there's still someone deep in a bunker, muttering about how wonderful public fields really are, and how the masses are all just sheep, etc.

    Update 2: Clearly static final public fields are the standard way to create public constants. I was referring more to using public fields for object state (even immutable state). I'm thinking that it does seem like a design flaw that one should use public fields for constants, but not for state... a language's rules should be enforced naturally, by syntax, not by guidelines.
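
    A small sketch of the two cases Update 2 distinguishes (the class is invented for illustration):

        public class Point {
            // Constants as public fields: accepted practice.
            public static final int MAX_COORD = 1024;

            // Object state as a public field: what the orthodoxy forbids...
            public int x;

            // ...in favor of accessors, even when they add nothing yet:
            private int y;
            public int getY() { return y; }
            public void setY(int y) { this.y = y; }
        }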

    Read the article

  • Facebook - Isn't this a big vulnerability risk for users? (After Password Change)

    - by Trufa
    I would like to know your opinions as programmers / developers. When I changed my Facebook password yesterday, I entered the old one by mistake and got this: (screenshot). Am I missing something here, or is this a big potential risk for users? In my opinion this is a problem BECAUSE it is Facebook, used by, well, everyone, and the latest statistics show that 76.3% of users are idiots [source: me]. That is more than 3/4!

    All kidding aside: isn't this useful information for an attacker?

    - It reveals private information about the user.
    - It could help the attacker gain access to another site on which the user used the same password. Granted, you shouldn't use the same password twice (but remember: 76.3%!!!).
    - Doesn't this simply increase the attack surface? It increases the chances of getting useful information, at least.

    On a site like Facebook, a first choice for hackers and (bad) people interested in valuable personal information, shouldn't anything that increases the chance of a vulnerability be removed? Am I missing something? Am I being paranoid? Will 76.3% of accounts be hacked after this post? Thanks in advance!! BTW, if you want to try it out, a dummy account: user: [email protected] (old) password: hunter2

    Read the article

  • Why not commit unresolved changes?

    - by Explosion Pills
    In a traditional VCS, I can understand why you would not commit unresolved files, because you could break the build. However, I don't understand why you shouldn't commit unresolved files in a DVCS (some of them will actually prevent you from committing the files). Instead, I think that your repository should be locked from pushing and pulling, but not from committing. Being able to commit during the merging process has several advantages (as I see it):

    - The actual merge changes are in history.
    - If the merge was very large, you could make periodic commits.
    - If you made a mistake, it would be much easier to roll back (without having to redo the entire merge).
    - The files could remain flagged as unresolved until they were marked as resolved. This would prevent pushing/pulling.
    - You could also potentially have a set of changesets act as the merge instead of just a single one. This would allow you to still use tools such as git rerere.

    So why is committing with unresolved files frowned upon/prevented? Is there any reason other than tradition?
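
    As a concrete reference point, git can already be pushed into something close to this; a sketch:

        # A merge stops on conflicts, and committing is refused while
        # paths remain unmerged:
        git merge feature
        # CONFLICT (content): Merge conflict in app.c
        git commit        # refused: unmerged paths

        # But staging the file with its conflict markers still in it is
        # accepted, which puts the unresolved state into history, roughly
        # what the list above argues for:
        git add app.c
        git commit -m "WIP merge: app.c still contains conflict markers"

    The difference from the proposal is that nothing flags the result as unresolved afterwards; git considers the merge concluded, so the "block pushing until resolved" part has no native support.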

    Read the article

  • windows 8 + Ubuntu dual boot

    - by Jack Yuan
    I installed Ubuntu 13.04 alongside Windows 8. Yes, I can access both of them, but the process is kind of long. In the BIOS, EFI is for Windows 8 and legacy support is for Ubuntu. If I choose EFI first, startup goes straight to Win8 without offering me a choice. If I choose legacy first, startup offers me a choice between Win8 and Ubuntu, but I can only choose Ubuntu; if I choose Win8, I get an error (a file missing under the configuration). That is to say, every time I want to switch to the other OS, I have to go into the BIOS and change the priority settings.

    I heard that Secure Boot might be the cause of this situation, but there is not even an option called "Secure Boot" in my BIOS, which means I cannot disable it. All I want is an option menu that appears every time I turn on my computer, so I can easily choose which OS I want for today. Can anyone help me, please? Thank you very much!!
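
    For what it's worth, the tool most often suggested for repairing exactly this kind of mixed EFI/legacy dual boot is Boot-Repair, run from an Ubuntu live session. A sketch (the PPA name is from memory; verify it before use):

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install -y boot-repair
        boot-repair   # then choose "Recommended repair"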

    Read the article

  • I am not the most logically-organized person. Do I have any chance at being a good 'low-level' programmer?

    - by user217902
    Background: I am entering college next year. I really enjoy making stuff and solving logical problems, so I'm thinking of majoring in compsci and working in software development. I hope to have the kind of job where I can work with implementing / improving algorithms and data structures on a regular basis... as opposed to, say, a job that's purely concerned with mashing different libraries together, or 'finding the right APIs for the job'. (Hence the word 'low-level' in the title. No, I don't wish to write assembly all day.)

    Thing is, I've never been the most logically-sharp person. Thus far I have only worked on hobby projects, but I find that I make the silliest of errors ever so often, and it can take me ages to find them: anywhere between three hours and a day to locate a simple segfault, off-by-one error, or other logical mistake. (Of course, I do other things in the meantime, like browsing SO, reddit, and the like...) It's not like I'm 'new' to programming either; I first tried C++ maybe five years ago.

    My question is: is this normal? Should a programmer with any talent solve it in less time? Having read Spolsky's Smart and Gets Things Done, where he talks about the large variance in programming speed, am I near the bottom of the curve, and therefore destined to work in companies that cannot afford to hire quality programmers? I'd like to think that conceptually I'm okay: I can grasp algorithms and concepts pretty well, I do fine in math and science, although I probably drop signs in my equations more often than the next guy. Still, grokking concepts makes me happy, and is the reason why I want to work with algorithms. I'm hoping to hear from those of you with real-world programming experience.

    TL;DR: I make many careless mistakes; should I not consider programming as a career?

    Read the article

  • Remote Access to Owncloud Server

    - by John
    I'm currently trying to set up my own ownCloud server. I've got it fully installed, configured, and accessible from within my own local network, but I cannot figure out how to access it from the outside. So far I've:

    - Successfully set up port forwarding on my local router, via both 'single port forwarding' and 'port range forwarding', for ports 80, 443 and 3306 (Apache-Full and MySQL).
    - Successfully obtained my external IP address. I've also tested this magic number from within the network at #insertIPhere/owncloud and it did work.
    - Successfully set up the server using SQLite.
    - Successfully set up the server using MySQL.
    - Created the following exceptions in my firewall: Allow In port 80 (Apache Full), Allow In port 443 (Apache Full), Allow In port 3306 (MySQL).
    - Tried connecting from several different remote networks, to rule out something on their end.

    As far as trying to access it, I'm doing so through Google Chrome and Mozilla Firefox, trying to reach the server at #insertIPhere/owncloud using the above public IP address. So what have I missed, and how do I access my server from outside? Thanks in advance for your help and time, and I apologize in advance for what will probably turn out to be my noobish mistake in networking. I've looked at the official documentation, and also this question here.
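
    One hedged way to narrow this down is to test from outside the LAN with curl and read the failure mode; the IP below is a documentation placeholder, so substitute the real external address:

        # Run from a machine outside the network (e.g. a phone on cellular data):
        curl -I http://203.0.113.17/owncloud/

        # No response / timeout: the port forward or the ISP (some block port 80
        # on residential lines) is the problem. Any HTTP status line, even 403
        # or 404: the request reached Apache, so look at the server config next.

    As an aside, port 3306 generally doesn't need to be forwarded at all; MySQL is consulted by the web server from inside the host, and exposing it to the internet is its own risk.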

    Read the article

  • I just received a complaint from a user of the website I maintain. Should I do anything?

    - by Chris
    I was sent a large wall of text by a user of the website I maintain at my job. They are clearly upset at having to deal with a horribly outdated web application that has not seen any serious updates in over 6 years: no refactoring has been done, the code quality is terrible, the security unchecked, policy compliance ignored, and on top of that it's ugly and frankly embarrassing. Keep in mind this is a small business, but the website is used by hundreds daily. I'm one of two programmers there, and I've been working there for two years.

    This person says they are about my age (22) and understand technology (but can't use proper grammar). The complaint mentioned awkward pages and actions on the website, but they don't even have a clue as to the depth of the flaws in this website. Now, I would love to honestly tell them that there's a lot wrong with this company and that this application was built when we were in high school, and that while it's not my fault the website is terrible, I'm the one in a position to fix it. But on the other hand, I could just say nothing and ignore it.

    Would responding publicly have any advantage for future employees (showing integrity), or would it just be a completely pointless mistake? Odds are, even if I respond, only that one person will ever read it. Regardless, I'm probably just going to ignore it and continue starting my project to refactor the website.

    Read the article

  • Problems to boot, Ubuntu entry does not work anymore

    - by user104108
    A few months ago I decided to install Ubuntu 12.04 on my PC alongside my Windows 7 partition. In order to do that and avoid any mistake, I followed these steps: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/2/

    Everything was going well until I decided to update to the 12.10 release. I don't know what happened, but after I updated my Ubuntu, it stopped working; it didn't even launch. When I turned on my PC and chose "Ubuntu 12.04" on the GRUB screen, a weird message appeared. Well, so I decided to install Ubuntu 12.10 and forget about the 12.04 partition, no problem. I erased the partitions used for Ubuntu 12.04 with EaseUS Partition Manager. However, when I start my PC, there is still the option of "Ubuntu 12.04" to choose; is that bad?

    And what about now: can I use the Windows installer for Ubuntu ( http://www.ubuntu.com/download/help/install-ubuntu-with-windows ) to install Ubuntu 12.10? What should I do to have Ubuntu 12.10 and Windows 7 in dual boot again? Thanks; Thales.

    Read the article

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you put a float3 instead of a float4, or something like that, but I've checked over and over, and as far as I can tell, everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error:
        Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0)
        as input, but it is not provided by the output stage.
        [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name                 Index   Mask  Register   SysValue  Format   Used
        // -------------------- ----- ------ --------- ---------- ------- ------
        // POSITION                 0   xyz          0       NONE   float   xyz
        // NORMAL                   0   xyz          1       NONE   float
        // COLOR                    0   xyzw         2       NONE   float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal   : NORMAL0;
            float4 Color    : COLOR0;
        };

    The input layout, from PIX, is: (screenshot)

    The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?

    Read the article

  • On installing nvidia drivers on 12.10 I get "Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)"

    - by james
    New Ubuntu user here. I recently made the mistake of trying a different NVIDIA driver. I'd managed to get the previous one (nvidia-current) working through Software Sources a few weeks ago. The other day I tried to cross over to nvidia-experimental-310, and this produced a system error. Swapping back and forth between proprietary drivers now always causes an error, and I can't get any of them to work. Installing through the terminal, I get this error message every time:

        Building initial module for 3.5.0-19-generic
        Error! Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)
        Consult /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log for more information

    On rebooting, I end up with a crappy screen resolution and a thick black border around the screen. I use gksudo software-properties-gtk to bring up Software Sources, where I can change back to the nouveau driver, which restores my screen. After that, I can't find /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log, so I can't tell you what's inside. Any ideas what might be preventing the NVIDIA driver from installing?

    SOLUTION FOUND. This is what worked:

    - Upgrade to kernel 3.7.0, as detailed here.
    - Upgrade to the latest version of the NVIDIA drivers, as detailed here.

    No idea what was happening with kernel 3.5.0-19, but this seems to be better. A little slower on boot, maybe, but after days of messing around it's nice to have something that works.

    Read the article

  • two guitexture that do not work together

    - by London2423
    I have two GUITextures that move a cube left and right. It's pretty strange, but together they don't work. If I activate only one, it works. To be more specific: if I have the left GUITexture alone in the game, the cube moves left. If I have the right GUITexture activated alone, the cube moves right. "Seems all fine," I thought, but if I have both of them, the cube moves only right and not left. Where is the mistake?

    Here is the code inside the cube GameObject for the right move:

        void OnMousedown () {
            transform.position += Vector3.right * Time.deltaTime;
        }

    For the left move:

        void OnMousedown () {
            transform.position += Vector3.left * Time.deltaTime;
        }

    And this is the left GUITexture code:

        //move the cube left
        Cube.GetComponent<Left> ().enabled = true;
        left.transform.position += Vector3.left * Time.deltaTime;

    This is the right GUITexture:

        //move the cube right
        Cube.GetComponent<Left> ().enabled = true;
        right.transform.position += Vector3.right * Time.deltaTime;

    What is the reason for this? I hope someone can help me.

    Read the article

  • How to learn to deliver quality software designs when working on a tight deadline?

    - by chester89
    I've read many books about how to design great software, but I struggle to come up with good design decisions when it comes to business apps, especially when the timeframe is tight.

    In the company I currently work for, the following situation happens all the time: my team lead tells me there's a task to do, I call some guy or girl from the business side who tells me exactly what they want, and then I start coding. The task always fits into some existing application (we do only web apps or web services); usually its purpose is to pull data from one data source and put it into another, with some business logic attached in the process. I start coding, and then, after spending some time on the problem, my code doesn't work as expected, either because of a technical mistake or my lack of knowledge of the domain. The business side rings me 2-3 times a day to hurry me up. I ask my team lead for help; he comes over, sees my code, and goes "What's this?". Then he throws away about half of my code, including all the design decisions I made, writes 2-3 methods that do the job (each of them usually 200-300 lines long or more, by the way), and the task is complete; the code works as it should have.

    The guy is smarter than me, obviously, and I'm aware of that. My goal is to be a better software developer, which means writing better code, not finishing the job quicker with some crappy code. And the thing is, when I have enough time to tackle a problem, I can come up with a design that is good (in my opinion, of course), but I fall short when I'm on a tight deadline. What should I do? I am fully aware that this is a rather vague explanation, but please bear with me.

    Read the article

  • Bikeshedding: Placeholders in strings

    - by dotancohen
    I find that I sometimes use placeholders in strings, like this:

        $ cat example-apache
        <VirtualHost *:80>
            ServerName ##DOMAIN_NAME##
            ServerAlias www.##DOMAIN_NAME##
            DocumentRoot /var/www/##DOMAIN_NAME##/public_html
        </VirtualHost>

    Now, I am sure that it is a minor issue whether the placeholder is ##DOMAIN_NAME##, !!DOMAIN_NAME!!, {{DOMAIN_NAME}}, or some other variant. However, I now need to standardize with other developers on a project, and we all have a vested interest in having our own placeholder format made standard in the organization. Are there any good reasons for choosing any of these, or others? I am trying to quantify these considerations:

    - Aesthetics and usability. For example, __dict__ may be hard to read, as we don't know how many underscores are in there.
    - Compatibility. Will some language try to do something funny with {} syntax in a string (such as PHP does with "Welcome to {$siteName} today!")? Actually, I know that PHP and Python won't, but what about others? Will a C++ preprocessor choke on the ## format? If I need to store the value in some SQL engine, might it treat part of the placeholder as a comment? Any other pitfalls to be wary of?
    - Maintainability. Will the new guy mistake ##SOME_PLACEHOLDER## for a language construct?
    - The unknown. Surely the wise folk here will think of other aspects of this decision that I have not thought of.

    I might be bikeshedding this, but if there are real issues that might be lurking, then I would certainly like to know about them before mandating that our developers adhere to a potentially-problematic convention.
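
    To make the compatibility point concrete, here is a minimal sketch in Python of the sort of renderer any of these formats implies (the template is the example above; the function and its behavior are invented for illustration):

        import re

        TEMPLATE = """<VirtualHost *:80>
            ServerName ##DOMAIN_NAME##
            ServerAlias www.##DOMAIN_NAME##
            DocumentRoot /var/www/##DOMAIN_NAME##/public_html
        </VirtualHost>
        """

        def render(template, values):
            # Substitute ##NAME## tokens; fail loudly on unfilled placeholders
            # instead of letting them leak into production configs.
            def replace(match):
                name = match.group(1)
                if name not in values:
                    raise KeyError("unfilled placeholder: " + name)
                return values[name]
            return re.sub(r"##([A-Z0-9_]+)##", replace, template)

        print(render(TEMPLATE, {"DOMAIN_NAME": "example.org"}))

    Whichever delimiter wins, this is the property worth standardizing on: the delimiter should be expressible as one unambiguous regular expression and be inert in every language the templates pass through.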

    Read the article
