Search Results

Search found 25400 results on 1016 pages for 'enable manual correct'.


  • Calendar icon in Unity shows wrong date

    - by felix
    There is a column of icons in Unity on the far left, and one of them shows the number "31" in big numerals. If you mouse over it, it says "Google Calendar", and if you click on it you get Google Calendar, which of course shows the correct date. Today is November 3, so shouldn't it say "3" instead? If I type "date" on the command line I get Sun Nov 3 21:19:37 GMT 2013. I see this was fixed for Chrome, at least, back in 2011: http://gmailblog.blogspot.co.uk/2011/04/5-years-of-google-calendar-and-new.html

    Read the article

  • How do I configure Google sitelinks in WordPress? (without editing its HTML or PHP source code) [duplicate]

    - by Alexander Farber
    This question already has an answer here: What are the most important things I need to do to encourage Google Sitelinks? (5 answers) I run a WordPress 3.7.1-de_DE site, but don't have much experience with it yet. When my site comes up in a Google search, there are 2 links displayed underneath. I believe these links are called Google "sitelinks", and my question is how to configure them in WordPress. While the right link points to the /ueber-mich URL on the website, the left link was pointing to a non-existent /imprint, so I had to add that page as a workaround for now. I'd also like to change /imprint to the German /impressum anyway (currently I use mod_rewrite to redirect). UPDATE: Dear downvoters and movers, would you mind READING my question, please? My question is about how to configure these Google sitelinks in WordPress. So it is NOT A DUPLICATE (I do not want to edit the HTML code, I want to find the correct configuration in WordPress) and my question SHOULDN'T HAVE BEEN MOVED AWAY from wordpress.stackexchange.com.

    Read the article

  • Do get and set accessors protect different instances of a variable?

    - by Chris Halcrow
    The standard way to implement get and set accessors in C# and VB.NET is to use a public property that sets and retrieves the value of a corresponding private variable. Am I right in saying that this has no effect across different instances of a variable? By this I mean: if there are different instances of an object, then those instances and their properties are completely independent, right? So, if my understanding is correct, the private backing variable is just a construct that makes the get and set pattern possible? I've never been 100% sure about this.
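
    The question is about C#/VB.NET, but Python properties behave the same way for this purpose, so a short hypothetical sketch can illustrate it: the private backing field lives on each instance, so setting the property on one object never touches another.

        class Person:
            def __init__(self, name):
                self._name = name          # per-instance private backing field

            @property
            def name(self):                # the "get" accessor
                return self._name

            @name.setter
            def name(self, value):         # the "set" accessor
                self._name = value

        a = Person("Alice")
        b = Person("Bob")
        a.name = "Carol"
        print(a.name, b.name)              # prints: Carol Bob -- instances stay independent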

    Read the article

  • What is the difference between working in IT at an investment bank and at a professional IT company?

    - by deepsky
    Suppose there are two positions: IT at an investment bank (developer for the infrastructure or the platform) and a famous IT company (embedded Linux developer). As far as I understand, since not everyone at an investment bank will get the chance to work on the core trading system, most people just do the same job they would do at a normal IT company, and some of the tasks can even be outsourced. But at a professional IT company you will have more opportunities to practice your coding skills and deepen your professional knowledge, so you have many more choices when you want to change jobs, whereas in investment-bank IT you do not. Is this correct?

    Read the article

  • file:///cdrom/pool/main/k/klibc/klibc-utils_1.5.25-1ubuntu2_amd64.deb was corrupt

    - by curlyreggie
    I guess this is a trivial and commonly asked question, but I'll raise it again here because I haven't been able to find a working solution. I'm trying to install an Ubuntu Cloud setup on VMware using the package from http://download.ubuntu.com and get this basic installation issue, as shown in the image below: file:///cdrom/pool/main/k/klibc/klibc-utils_1.5.25-1ubuntu2_amd64.deb was corrupt. The problem is that I cannot continue by simply skipping this, since it happens to be an essential part of the setup. How can I fix this? Help is sincerely appreciated.

    Read the article

  • Radeon HD4850 card, Ubuntu 12.04.1 LTS: installed FGLRX drivers, but still running VESA. How do I solve it?

    - by user113416
    I have an ATI HD4850 card and Ubuntu 12.04.1 LTS. To begin with, after a fresh install I installed the fglrx drivers from System > Additional Drivers; according to fglrxinfo everything was alright, but the system was still running on vesa:sem. After that I reinstalled and installed the drivers following many tutorials, but still had the same problem. One of the tutorials was this one: What is the correct way to install ATI Catalyst Video Drivers (fglrx)? Of course, I tried to install the 12.6 driver. Now I have a fresh install of Ubuntu and don't want to touch anything without someone's guidance, because my three-day nightmare produced no results. What must I do to get adequate performance out of the video card? Thanks in advance.

    Read the article

  • Samsung NC10 Broadcom difficulties

    - by simonp
    I am new to Ubuntu/Linux and am stuck already! Any help much appreciated. I have managed to install Ubuntu Maverick 10.10 (from a USB flash drive) on my NC10 which also has Windows 7 on the partitioned drive. But the wireless internet is not working. I have identified the hardware as a Broadcom BCM4313. I have also managed to find that the correct driver is installed (Modaliases for Broadcom 802.11 Linux STA driver). I have followed advice from elsewhere and there does not seem to be any competition from other drivers. I am now stuck and do not have any other internet access on this netbook. Any ideas?

    Read the article

  • Error after installing Ubuntu 12.04 using Wubi

    - by KJ50
    After using the Windows Ubuntu Installer from within Windows, I am prompted to restart, so I follow the directions. When I try to start Ubuntu after restarting, the desktop background appears, but then a loading bar appears with the title "Verifying the installation configuration...". While this is loading, an error window pops up that says: "No root file system is defined. Please correct this from the partitioning menu." There is only an 'OK' button available to click, and if I click it the same error window appears. I do not know how to get to the "partitioning menu" from this state, so the only option I have is to shut down my computer. What can I do so that Ubuntu finds a "root file system"? Can I diagnose this problem via Windows? Does anyone have any insight? FYI: I am using a new ultrabook with 6 GB RAM, a 3rd-gen Intel i7 processor, and no CD/DVD drive.

    Read the article

  • How do I install VirtualBox 4.1?

    - by William
    How can I install virtualbox-4.1.4 on Ubuntu 11.04 smoothly? When I apt-get install libqt* I get unmet dependencies; there is a long list of them. Where should I start, and is there a command that installs VirtualBox cleanly? The output is:
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
       virtualbox-4.1 : Depends: libcurl3 (>= 7.16.2-1) but it is not going to be installed
                        Depends: libqt4-network (>= 4:4.5.3) but it is not going to be installed
                        Depends: libqt4-opengl (>= 4:4.7.0~rc1) but it is not going to be installed
                        Depends: libqtcore4 (>= 4:4.7.0~beta1) but it is not going to be installed
                        Depends: libqtgui4 (>= 4:4.7.0~beta1) but it is not going to be installed
                        Recommends: libsdl-ttf2.0-0 but it is not going to be installed
                        Recommends: dkms but it is not going to be installed
                        Recommends: libhal1 (>= 0.5) but it is not going to be installed
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer? [closed]

    - by Superbest
    I don't have much formal training in programming and have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which turn out to be both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium-sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial, which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as:
    - What parts of my application, and what sorts of things, aren't necessarily worth testing?
    - How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope?
    - What is a good naming convention for test methods? (Since apparently the name of the method is the only way I will be able to tell, from a glance at the test results table, what works in my program and what doesn't.)
    - Is it bad to have many asserts in one test method? Apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why the test is failing.
    - If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now?
    - If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate?
    - Closely related to the previous point: if a class's main operation happens in a state that the program arrives at only after some involved operations (say, the class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing all the other code that brings it to that state along with it?
    - In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data.)
    - How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are they even good ideas?
    - Is there anything like design patterns concerning testing specifically?
    I guess the main themes of what I'd like to learn more about are (1) the overarching principles that should be followed (or at least considered) in every testing effort, and (2) popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things quite helpful in keeping my codebases sane. I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, tens of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given under the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors. Summary: So my question is, what are some resources to learn about unit testing for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and amending discrepancies if it isn't.
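
    Since the post notes that language and IDE shouldn't matter much, here is a minimal sketch, written with Python's unittest rather than MSTest and with a purely hypothetical Account class, of two of the rules of thumb asked about: naming test methods after the behaviour they verify, and keeping each test to one logical assertion so a failure explains itself.

        import unittest

        class Account:
            """Tiny hypothetical class under test."""
            def __init__(self, balance=0):
                self.balance = balance

            def deposit(self, amount):
                if amount <= 0:
                    raise ValueError("amount must be positive")
                self.balance += amount

        class AccountDepositTests(unittest.TestCase):
            # Behaviour-named tests: the name alone says what broke.
            def test_deposit_increases_balance_by_amount(self):
                account = Account(balance=100)
                account.deposit(50)
                self.assertEqual(account.balance, 150)

            def test_deposit_rejects_non_positive_amounts(self):
                account = Account()
                with self.assertRaises(ValueError):
                    account.deposit(0)

        if __name__ == "__main__":
            unittest.main()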

    Read the article

  • How can I automate or script daily downloads of any new anti-virus databases, and then have the program scan my drive?

    - by Macgrimm
    Howdy, all Super Users. I humbly ask whether any Super User can point this long-time, gray-haired Apple tech in the right direction on this issue. I believe there are probably many ways to skin this cat, but I am looking for the best, most unattended way to get it done. Any help will be greatly appreciated. Also, I know there are much better software options out there for the Mac, so please don't go there! The politics of this company dictate which anti-virus we have to use. Anyway, without any further wait: basically I am trying to automate 2 very important functions of McAfee Anti-Virus for Mac. First I want to automate the process of retrieving new virus definition files, and second I want to automate the process of scanning for viruses. It turns out that in McAfee Anti-Virus for the Mac these are both manual functions, left up to the user (per user account) to perform. Depending on all of about 150 Mac users to perform these 2 tasks themselves gets around 65% compliance. My question, then, is: I can use the command line, such as (open /Applications/McAfee\ Security.app), to open the Security Console. But how can I make McAfee go out and grab the definition files and scan the computer from the command line? I have to admit I am at a crossroads and MacAltimers has set in. I would really appreciate it if any of you "Super ~ Users" could help me out of this MacAltimers loss of what to do. Thanks to all up front, Macgrimm

    Read the article

  • Architecting persistence (and other internal systems). Interfaces, composition, pure inheritance or centralization?

    - by Vandell
    Suppose that you need to implement persistence. I think you're generally limited to four options (correct me if I'm wrong, please). Each persistent class can:
    - implement an interface (IPersistent);
    - contain a 'persist-me' object, a specialized object (or class) made only to be used by the class that contains it;
    - inherit from Persistent (a base class);
    - or you can create a gigantic class (or package) called Database and put your persistence logic there.
    What are the advantages and problems of each one? In a small (5 kloc) and algorithmically (or organisationally) simple app, which is probably the best option?
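
    A rough, language-agnostic sketch of the four options (written here in Python; all class names are hypothetical):

        from abc import ABC, abstractmethod

        # Option 1: every persistent class implements an interface.
        class IPersistent(ABC):
            @abstractmethod
            def save(self): ...

        class Customer(IPersistent):
            def save(self):
                print("saving customer")

        # Option 2: composition -- the class holds its own dedicated persister.
        class OrderPersister:
            def save(self, order):
                print("saving order")

        class Order:
            def __init__(self):
                self._persister = OrderPersister()
            def save(self):
                self._persister.save(self)

        # Option 3: inheritance from a Persistent base class.
        class Persistent:
            def save(self):
                print(f"saving {type(self).__name__}")

        class Invoice(Persistent):
            pass

        # Option 4: one central Database object that persists everything.
        class Database:
            def save(self, obj):
                print(f"saving {type(obj).__name__}")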

    Read the article

  • Problems with Net::FTP slowing down

    - by c0bra
    I'm running into an issue using Net::FTP (latest version, 2.77) to transfer files to a remote host, where a process is waiting to take each file and feed it into some other system. The remote process does this every 5 minutes but ignores files that were modified in the last 0.2 seconds (that's right, 1/5th of a second). The problem is that for some reason the transfer seems to halt or slow down, no data is transferred for several seconds, and during this time the process picks up the incomplete file and removes it. The weird thing is that when I use the ftp binary manually, the file seems to transfer fine. I've tried messing with all of the Net::FTP switches (Active/Passive, different block sizes) and nothing seems to help. What's also weird is that the file transfers fairly quickly at first, and occasionally a bit into the transfer: 300-500k will go up immediately, but then it slows down to where the file size only increases by 2,896 bytes every several seconds. It doesn't seem to happen when I send the file to a different remote host, but since a regular manual ftp transfer works with this host, I don't know what to think. Some combination of Net::FTP and possibly a slow or wonky connection?

    Read the article

  • Factory for arrays of objects in Python

    - by Vorac
    OK, the title might be a little misleading. I have a Window class that draws widgets inside itself in the constructor. The widgets are all of the same type, so I pass a list of dictionaries containing the parameters for each widget. This works quite nicely, but I am worried that the interface to callers is obfuscated. That is, in order to use Window, one has to study the class, construct a correct list of dictionaries, and then call the constructor with only one parameter, widgets_params. Is this good or bad design? What alternatives does Python's syntax provide?
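
    One common alternative to raw dictionaries is a small parameter object, so callers can discover the expected fields from the class itself instead of studying Window's internals. A minimal sketch, with field names invented purely for illustration:

        from dataclasses import dataclass

        @dataclass
        class WidgetSpec:
            """Explicit, discoverable widget parameters (hypothetical fields)."""
            label: str
            width: int = 100
            height: int = 30

        class Window:
            def __init__(self, widget_specs):
                self.widgets = [self._make_widget(spec) for spec in widget_specs]

            def _make_widget(self, spec):
                # The factory step: turn one spec into one widget.
                return {"label": spec.label, "size": (spec.width, spec.height)}

        window = Window([WidgetSpec("OK"), WidgetSpec("Cancel", width=120)])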

    Read the article

  • Pooling (Singleton) Objects Against Connection Pools

    - by kolossus
    Given the following scenario:
    - a canned enterprise application that maintains its own connection pool;
    - a homegrown client application for the enterprise app, built using the Spring framework with the DAO pattern.
    While I may have a simplistic view of this, I think the following line of thinking is sound:
    1. Good: having a fixed pool of DAO objects holding on to connection objects from the pool. Clearly, the pool should be capable of scaling up (or down, depending on need) and the connection objects must outnumber the DAOs by a healthy margin.
    2. Bad: instantiating brand-new DAOs for every request to the enterprise app; each DAO will attempt to grab a connection from the pool and release it when it's done.
    Since these are service objects, no (mutable) state will be held by them (reduced risk of concurrency issues). I also think that with #1 there should be little to no resource contention, while in #2 there will almost always be a DAO waiting to be serviced. Is my thinking correct, and what could go wrong?
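
    The post is about Spring and Java, but the trade-off is language-agnostic. Below is a toy Python sketch of option #1 (a single stateless DAO shared by all callers, which borrows a connection per call and always returns it); all names are hypothetical and this is not Spring's API.

        import queue

        class ConnectionPool:
            """Toy pool: hands out connections and takes them back."""
            def __init__(self, connections):
                self._free = queue.Queue()
                for conn in connections:
                    self._free.put(conn)

            def acquire(self):
                return self._free.get()      # blocks if the pool is exhausted

            def release(self, conn):
                self._free.put(conn)

        class AccountDao:
            """Stateless DAO: safe to share one instance across requests."""
            def __init__(self, pool):
                self._pool = pool

            def find(self, account_id):
                conn = self._pool.acquire()
                try:
                    # placeholder for a real query executed over conn
                    return {"id": account_id}
                finally:
                    self._pool.release(conn)

        pool = ConnectionPool(["conn-1", "conn-2", "conn-3"])
        dao = AccountDao(pool)               # one shared instance, reused for every request
        print(dao.find(42))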

    Read the article

  • 2 Computers, same network, different outgoing speeds when uploading to internet?

    - by user117339
    I have 2 work machines in my office, a PowerMac G5 and a MacBook Air, both behind an IPCop firewall. The PowerMac is connected through a gigabit switch; the MacBook Air is connected through a Netgear 802.11g access point that is then plugged into the gigabit switch. There is also a FreeNAS box; both machines are able to read and write files to it at close to their pipe speeds. The main problem is when I am trying to upload files to the internet at large. The G5 is only hitting 0.1 - 0.25 Mbps, while the MacBook is able to hit 2-3 Mbps. The setup (G5 / IPCop / network) has been the same for 5 years; the issues with the internet speed started about 3 months ago. I hadn't tested on the MacBook at that point. I complained to the ISP; they said their modem needed a firmware update. I did that; nothing changed. I reset IPCop, turned off Squid, etc. No change. The ISP switched the office over to a better plan with a theoretical 6 Mbps up; still no change. At this point I tried testing the MacBook, and lo and behold, there's the speed. But why? I have tried changing out everything: cables, switches, using another ethernet port on the G5, wiping the system, using DHCP, using manual IPs, changing DNS servers, etc. Nothing works. I figured that if there were something horribly wrong with the network, then I would find a similar issue internally, but that is perfect: iperf, ping, etc. show no dropped packets and near saturation of the internal network. I'm at a loss as to what the heck is going on. Any ideas would be appreciated! Below are speedtest.net screenshots for the G5 and the MacBook Air.

    Read the article

  • Installation of ANYTHING failed

    - by Nervosa
    I have an issue concerning Chrome. It now launches perfectly, but when I try to install anything else I see:
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
       google-chrome-stable : Depends: lib32gcc1 (>= 1:4.1.1) but it is not installable
                              Depends: lib32stdc++6 (>= 4.6) but it is not installable
                              Depends: libc6-i386 (>= 2.11) but it is not installable
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    I remember I deleted some folder related to Chrome back when it wasn't launchable; I don't actually remember which directory. Now, when I try "sudo apt-get -f update && sudo apt-get -f install", I come across an error: sh: 0: getcwd() failed: No such file or directory. It seems that deletion was fatal. Got any ideas? Thanks.

    Read the article

  • What should someone learn to become a great web-app builder by 2015

    - by rickdronkers
    My brother just started learning some HTML/CSS at school and he loves it. He asked me for advice on what languages to learn in order to build great web apps by the time he leaves school (2015 or something like that). I did some research, and right now I want to tell him (in this particular order):
    - HTML - HTML5
    - CSS - CSS3
    - JavaScript - jQuery - AJAX
    - PHP - Zend Framework
    - MySQL
    - Ruby - RoR
    I would like to know if this is correct before I waste his time. Stack Overflow seems like a place where people know the answer to these kinds of questions ;). Thanks!

    Read the article

  • Transfer a Thunderbird (17) profile from Win7 to Ubuntu 12.04

    - by William Curran
    I want to transfer a Thunderbird profile from Win7 to Thunderbird (17) on Ubuntu 12.04. I already copied the profile folder from Windows to Ubuntu and modified profiles.ini on the Ubuntu machine to include:
      [Profile]
      Name=Bill
      IsRelative=1
      Path=(the name of the transferred profile folder)
    I think the problem is that the Windows TB profile content (files and folder structure) looks VERY different from that of the Ubuntu TB profile that was created on installation. The Ubuntu install is new, whereas the Windows TB has undergone many updates, so it seems the system for profile storage has changed drastically. I tried to start TB in safe mode but couldn't get the path right to start TB in the terminal with the -safe-mode switch. What can I do? Bill

    Read the article

  • How should I license code written for a startup without a contract?

    - by andijcr
    I wrote a fair amount of code for a startup, but I didn't sign a contract before doing so. The only document I signed with them does not mention that I have to pass the rights to the code to them, and after consulting with a lawyer it seems that I own the full rights. Now I want to preemptively correct this situation by giving them some sort of exclusive license. Is there an existing license for closed-source, exclusive use that covers these cases, or do I simply write somewhere "I grant an exclusive license to use and modify this piece of code to FooBar-inc under the following conditions: bla bla bla, signed me, them"?

    Read the article

  • Could it be more efficient for systems in general to do away with Stacks and just use Heap for memory management?

    - by Dark Templar
    It seems to me that everything that can be done with a stack can be done with the heap, but not everything that can be done with the heap can be done with the stack. Is that correct? Then, for simplicity's sake, and even if we do lose a little performance with certain workloads, couldn't it be better to just go with one standard (i.e., the heap)? Think of the trade-off between modularity and performance. I know that isn't the best way to describe this scenario, but in general it seems that simplicity of understanding and design could be the better option, even if it costs some potential performance.

    Read the article

  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup and I'm a bit confused about the different memory types. The advantage of ECC is clear - single-bit error correction. When it comes to registered memory it seems more vague, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However I can't find any quantification of this. What I'm wondering about is: Is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported? If yes, at what point (amount of modules or GB of memory) do these improvements start to become noticeable? For a specific example, the HP ProLiant DL 120 G6 server manual states that maximum supported memory configuration is 16 GB unbuffered (4x4GB) or 12 GB registered (6x2GB). In this case I'd rather have the extra 4GB of memory if the reliability difference is negligible.

    Read the article

  • How do I get started with fog-type effects in a first-person game?

    - by Dream Lane
    Hey guys, I'm currently using JME3 to learn 3D game development in Java, and I have run into a situation. I would like to add fog effects to my games, but I don't even know where to start implementing this. I know how to set the camera's far frustum to limit the render distance, but that simply makes a sharp cutoff; I'd like to fog it up a bit to make it feel more natural. I'm looking for an answer that points me in the right direction. I'm not looking for specific code snippets or even JME3 engine specifics; I just want to get an idea of how this stuff works in general. Thanks!
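
    Engine specifics aside, distance fog generally comes down to blending each fragment's colour toward a fog colour by a factor that falls off with distance from the camera (JME3, for instance, packages this as a FogFilter used with its FilterPostProcessor). Here is a tiny sketch of the common exponential-squared fog formula, written in Python only to show the arithmetic:

        import math

        def fog_factor(distance, density):
            """Exponential-squared fog: 1.0 = no fog, 0.0 = fully fogged."""
            return math.exp(-(density * distance) ** 2)

        def apply_fog(scene_rgb, fog_rgb, distance, density):
            f = fog_factor(distance, density)
            # Blend the shaded scene colour toward the fog colour.
            return tuple(f * s + (1.0 - f) * g for s, g in zip(scene_rgb, fog_rgb))

        # A green fragment 40 units away, in light-grey fog of density 0.02:
        print(apply_fog((0.2, 0.6, 0.2), (0.8, 0.8, 0.8), 40.0, 0.02))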

    Read the article

  • Alternative Web model

    - by Above The Gods
    One of the problems web apps have compared to native apps, especially on the mobile front, is the constant need to re-download each web page on request, which ultimately leads to slower performance. What if web apps only downloaded pages when they were actually needed, not simply because they were requested? For example, perhaps the server could store a web page version number in a cookie; every slight change to the page on the server side would change the version number. Then, instead of the browser requesting a new page each time, why not just check the version number and have the server send the page only if the versions differ? If they match, the user can just use a cached copy. I'm sure browsers wouldn't necessarily have to change to accommodate this, correct?
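
    What the post describes is close to what HTTP conditional requests already offer: the server tags each version of a page (an ETag), the client sends the tag back on the next request, and the server answers with a tiny 304 response instead of the full page when nothing has changed. A minimal sketch using Python's standard library (the host and path are made up):

        import http.client

        def fetch_if_changed(host, path, cached_etag=None):
            """Download the page only if its version differs from the cached one."""
            conn = http.client.HTTPSConnection(host)
            headers = {"If-None-Match": cached_etag} if cached_etag else {}
            conn.request("GET", path, headers=headers)
            resp = conn.getresponse()
            if resp.status == 304:           # unchanged: reuse the cached copy
                return None, cached_etag
            body = resp.read()
            return body, resp.getheader("ETag")

        # First fetch: no tag yet, so the full page and its ETag come back.
        # body, etag = fetch_if_changed("example.com", "/app")
        # Later fetch: None for body means the cached copy is still current.
        # body, etag = fetch_if_changed("example.com", "/app", cached_etag=etag)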

    Read the article

  • What's the difference in content between the Ubuntu and Lubuntu Software Centers? Why aren't winners of the app showdown available in the latter?

    - by vasa1
    What's the difference in content between the Ubuntu and Lubuntu Software Centers? I've seen this question, Are softwares installable on ubuntu also installable on lubuntu?, which indicates that the content should be the same. However, I looked for Ridual, OrthCal, Cuttlefish and MenuLibre: while all four are available in the Ubuntu Software Center, they aren't listed in the Lubuntu Software Center. Will they eventually be listed? Who decides? I'm asking because one of the criteria in the apps showdown, if I understood correctly, was desktop integration. If by "desktop" Unity is specifically implied, then it would make sense not to include them in the Lubuntu Software Center. Is that correct? I'm on 12.04.

    Read the article
