Search Results

Search found 68249 results on 2730 pages for 'sudo work'.


  • Cannot see shared folder in /mnt/hgfs

    - by blasto
    I am trying to share a folder between Lubuntu 13.04 (in VMware Player) and Windows 7 64-bit. I followed a tutorial up to step 16. I typed a command and saw nothing. I also went into the /mnt/hgfs folder and saw nothing there. How do I fix this? http://theholmesoffice.com/how-to-share-folders-between-windows-and-ubuntu-using-vmware-player/ Command - dir /mnt/hgfs EXTRAS - By the way, this is how I actually reached step 16. Step 12 - sudo apt-get install hgfsclient Step 14 - If it does not work, then follow this tutorial - http://www.liberiangeek.net/2013/03/how-to-quickly-install-vmware-tools-in-ubuntu-13-04-raring-ringtail/ Step 16 - STUCK!
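
    A quick sanity check, assuming VMware Tools or open-vm-tools is actually installed in the guest and the folder is enabled in Player's sharing settings (the hgfsclient package name in step 12 looks off; the needed pieces normally come from open-vm-tools or VMware's own installer):

        # list the shares the host is exporting to this VM
        vmware-hgfsclient
        # mount all shares under /mnt/hgfs (older, kernel-module based tools)
        sudo mount -t vmhgfs .host:/ /mnt/hgfs
        ls /mnt/hgfs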

    Read the article

  • I can't see my Windows drive in Ubuntu

    - by Shardul Dhande
    I'm n00ber than any n00b you've ever seen, so bear with me... I just downloaded Wubi and installed Ubuntu 12.10 along with Windows 7 on the same HDD. Linux works fine, but doesn't show files from drive C (on which it is installed). I really need C as it contains all my files. I did df -h in the terminal. It does show the drive. /dev/sda2 577G 220G 358G 38% /host Now I tried to mount the drive sudo mount /dev/sda2 and got the response mount: /dev/sda2 already mounted or /host busy mount: according to mtab, /dev/sda2 is already mounted on /host I don't know what that means. The drive doesn't show up in the Home folder. What am I supposed to do now?
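
    The mount output above is actually good news: under Wubi the Windows partition is the /host mount point, so the C: drive contents should already be reachable there. A minimal check, assuming the layout shown above:

        # the Windows file tree lives under /host
        ls /host/Users
        # optional: a shortcut in the home folder so it shows up in the file manager
        ln -s /host ~/windows-c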

    Read the article

  • How come I cannot install plugins on my local WordPress install?

    - by classer
    Hello, I got WordPress up and running fine on Ubuntu 10.04 by using this source except that when I try to update and install themes/plugins I get the following error message on the wp-admin page: Installing Plugin: WordPress.com Stats 1.8.1 Downloading install package from http://downloads.wordpress.org/plugin/stats.1.8.1.zip… Unpacking the package… Could not create directory. /var/www/wordpress/wp-content/upgrade/stats.tmp/stats Actions: Return to Plugin Installer At first I thought I had to set up an FTP account, but after searching more I found some information that says I need to change the permissions of the wp-content folder which is located in the directory: /var/www/wordpress/wp-content I tried changing it by doing: sudo chmod -R 777 wp-content/ but when I tried installing a plugin I got the same error message. I also tried passing it 755 as a permission but still got the same thing. I settled on 755 because I have read it is more secure. How can I solve this problem safely and securely?
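
    Since Apache on Ubuntu runs as www-data, ownership usually matters more here than the mode bits; a minimal sketch of the commonly recommended setup, assuming the paths from the question:

        sudo chown -R www-data:www-data /var/www/wordpress/wp-content
        # 755 for directories, 644 for files is the usual baseline
        sudo find /var/www/wordpress/wp-content -type d -exec chmod 755 {} \;
        sudo find /var/www/wordpress/wp-content -type f -exec chmod 644 {} \;

    If WordPress still asks for FTP credentials afterwards, adding define('FS_METHOD', 'direct'); to wp-config.php tells it to write to the filesystem directly.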

    Read the article

  • Making Multilingual J! 1.5 + Joomfish + VM 1.17 more workable

    - by rhand
    I have been working with a multilingual Joomla! 1.5.23 e-commerce website for a client for quite a while and have made several customizations. But the client is still not happy that he has to adjust content in at least three locations: Joomfish Virtuemart Article Manager Joomfish is nice in the way that it allows you to create multilingual content and copy and paste the source language on the same page, which makes translation work easier, but it is annoying in the way you have to edit several custom fields at different locations/content types. As Joomla! source language content still needs to be created in the article manager first, this is the second location the client has to work at. The third location is Virtuemart. Here all the products and product categories are created. And here we added some custom fields as well. Now I was considering upgrading the website to Joomla 1.7 or later on to 1.8. These J! versions have better multilingual support. But I wonder if we can really make the client's life easier. We will still have to copy the source language to a new article and create content in another language. We will still have the issue of content in custom fields that needs to be translated, and we will still have to create content. Should I go for another CMS such as Magento, or do you think there is a way in a more recent Joomla! version to work with all content in one or at most two locations?

    Read the article

  • Are your personal insecurities screwing up your internal communications?

    - by Lucy Boyes
    I do some internal comms as part of my job. Quite a lot of it involves talking to people about stuff. I’m spending the next couple of weeks talking to lots of people about internal comms itself, because we haven’t done a lot of audience/user feedback gathering, and it turns out that if you talk to people about how they feel and what they think, you get some pretty interesting insights (and an idea of what to do next that isn’t just based on guesswork and generalising from self). Three things keep coming up from talking to people about what we suck at in terms of internal comms. And, as far as I can tell, they’re all examples where personal insecurity on the part of the person doing the communicating makes the experience much worse for the people on the receiving end. 1. Spending time telling people how you’re going to do something, not what you’re doing and why Imagine you’ve got to give an update to a lot of people who don’t work in your area or department but do have an interest in what you’re doing (either because they want to know because they’re curious or because they need to know because it’s going to affect their work too). You don’t want to look bad at your job. You want to make them think you’ve got it covered – ideally because you do*. And you want to reassure them that there’s lots of exciting work going on in your area to make [insert thing of choice] happen to [insert thing of choice] so that [insert group of people] will be happy. That’s great! You’re doing a good job and you want to tell people about it. This is good comms stuff right here. However, you’re slightly afraid you might secretly be stupid or lazy or incompetent. And you’re exponentially more afraid that the people you’re talking to might think you’re stupid or lazy or incompetent. Or pointless. Or not-adding-value. Or whatever the thing that’s the worst possible thing to be in your company is. So you open by mentioning all the stuff you’re going to do, spending five minutes or so making sure that everyone knows that you’re DOING lots of STUFF. And then you talk for the rest of the time about HOW you’re going to do the stuff, because that way everyone will know that you’ve thought about this really hard and done tons of planning and had lots of great ideas about process and that you’ve got this one down. That’s the stuff you’ve got to say, right? To prove you’re not fundamentally worthless as a human being? Well, maybe. But probably not. See, the people who need to know how you’re going to do the stuff are the people doing the stuff. And those are the people in your area who you’ve (hopefully-please-for-the-love-of-everything-holy) already talked to in depth about how you’re going to do the thing (because else how could they help do it?). They are the only people who need to know the how**. It’s the difference between strategy and tactics. The people outside of your bubble of stuff-doing need to know the strategy – what it is that you’re doing, why, where you’re going with it, etc. The people on the ground with you need the strategy and the tactics, because else they won’t know how to do the stuff. But the outside people don’t really need the tactics at all. Don’t bother with the how unless your audience needs it. They probably don’t. It might make you feel better about yourself, but it’s much more likely that Bob and Jane are thinking about how long this meeting has gone on for already than how personally impressive and definitely-not-an-idiot you are for knowing how you’re going to do some work.
Feeling marginally better about yourself (but, let’s face it, still insecure as heck) is not worth the cost, which in this case is the alienation of your audience. 2. Talking for too long about stuff This is kinda the same problem as the previous problem, only much less specific, and I’ve more or less covered why it’s bad already. Basic motivation: to make people think you’re not an idiot. What you do: talk for a very long time about what you’re doing so as to make it sound like you know what you’re doing and lots about it. What your audience wants: the shortest meaningful update. Some of this is a kill your darlings problem – the stuff you’re doing that seems really nifty to you seems really nifty to you, and thus you want to share it with everyone to show that you’re a smart person who thinks up nifty things to do. The downside to this is that it’s mostly only interesting to you – if other people don’t need to know, they likely also don’t care. Think about how you feel when someone is talking a lot to you about a lot of stuff that they’re doing which is at best tangentially interesting and/or relevant. You’re probably not thinking that they’re really smart and clearly know what they’re doing (unless they’re talking a lot and being really engaging about it, which is not the same as talking a lot). You’re probably thinking about something totally unrelated to the thing they’re talking about. Or the fact that you’re bored. You might even – and this is the opposite of what they’re hoping to achieve by talking a lot about stuff – be thinking they’re kind of an idiot. There’s another huge advantage to paring down what you’re trying to say to the barest possible points – it clarifies your thinking. The lightning talk format, as well as other formats which limit the time and/or number of slides you have to say a thing, are really good for doing this. It’s incredibly likely that your audience in this case (the people who need to know some things about your thing but not all the things about your thing) will get everything they need to know from five minutes of you talking about it, especially if trying to condense ALL THE THINGS into a five-minute talk has helped you get clear in your own mind what you’re doing, what you’re trying to say about what you’re doing and why you’re doing it. The bonus of this is that by being clear in your thoughts and in what you say, and in not taking up lots of people’s time to tell them stuff they don’t really need to know, you actually come across as much, much smarter than the person who talks for half an hour or more about things that are semi-relevant at best. 3. Waiting until you’ve got every detail sorted before announcing a big change to the people affected by it This is the worst crime on the list. It’s also human nature. Announcing uncertainty – that something important is going to happen (big reorganisation, product getting canned, etc.) but you’re not quite sure what or when or how yet – is scary. There are risks to it. Uncertainty makes people anxious. It might even paralyse them. You can’t run a business while you’re figuring out what to do if you’ve paralysed everyone with fear over what the future might bring. And you’re scared that they might think you’re not the right person to be in charge of [thing] if you don’t even know what you’re doing with it. Best not to say anything until you know exactly what’s going to happen and you can reassure them all, right? Nope. 
    The people who are going to be affected by whatever it is that you don’t quite know all the details of yet aren’t stupid***. You wouldn’t have hired them if they were. They know something’s up because you’ve got your guilty face on and you keep pulling people into meeting rooms and looking vaguely worried. Here’s the deal: it’s a lot less stressful for everyone (including you) if you’re up front from the beginning. We took this approach during a recent company-wide reorganisation and got really positive feedback. People would much, much rather be told that something is going to happen but you’re not entirely sure what it is yet than have you wait until it’s all fixed up and then fait accompli the heck out of them. They will tell you this themselves if you ask them. And here’s why: by waiting until you know exactly what’s going on to communicate, you remove any agency that the people that the thing is going to happen to might otherwise have had. I know you’re scared that they might get scared – and that’s natural and kind of admirable – but it’s also patronising and infantilising. Ask someone whether they’d rather work on a project which has an openly uncertain future from the beginning, or one where everything’s great until it gets shut down with no forewarning, and very few people are going to tell you they’d prefer the latter. Uncertainty is humanising. It’s you admitting that you don’t have all the answers, which is great, because no one does. It allows you to be consultative – you can actually ask other people what they think and how they feel and what they’d like to do and what they think you should do, and they’ll thank you for it and feel listened to and respected as people and colleagues. Which is a really good reason to start talking to them about what’s going on as soon as you know something’s going on yourself. All of the above assumes you actually care about talking to the people who work with you and for you, and that you’d like to do the right thing by them. If that’s not the case, you can cheerfully disregard the advice here, but if it is, you might want to think about the ways above – and the inevitable countless other ways – that making internal communication about you and not about your audience could actually be doing the people you’re trying to communicate with a huge disservice. So take a deep breath and talk. For five minutes or so. About the important things. Not the other things. As soon as you possibly can. And you’ll be fine. *Of course you do. You’re good at your job. Don’t worry. **This might not always be true, but it is most of the time. Other people who need to know the how will either be people who you’ve already identified as needing-to-know and thus part of the same set as the people in your area you’ve already discussed this with, or else they’ll ask you. But don’t bring this stuff up unless someone asks for it, because most of the people in the audience really don’t care and you’re wasting their time. ***I mean, they might be. But let’s give them the benefit of the doubt and assume they’re not.

    Read the article

  • Fresh install on SSD with Ubuntu and Windows Vista, using whole disk encryption for Ubuntu

    - by nategator
    I would like to do a fresh install on an OCZ Vertex Plus R2 60GB SSD I purchased on the cheap. Since the AES encryption looks like it may not work optimally on this drive, I would like to set up a dual boot with Windows Vista (the only Windows copy I have for clean install purposes) and Ubuntu 12.04, with the best encryption scheme possible. My plan is to have Windows around just in case I need to use a program that won't work with Wine, and Ubuntu as my daily OS with all of my information secured in case the laptop is ever stolen or sold. Although this setup will not provide a lot of space, I think I can squeeze in both OSes and have enough for second-computer office tasks. So, my questions are: Which OS should I install first, Ubuntu or Vista? Any special considerations when partitioning the drive? How should I install Ubuntu to ensure full disk encryption for the Linux partition(s) and/or my daily computing? Is there a significant performance upgrade from doing a solo install of Ubuntu instead of a dual boot setup? Will TRIM, for example, work correctly? Are there any significant security concerns with going the route of a dual boot, other than the fact that any activity on Windows may be fully recoverable if the drive is stolen or sold? Thanks in advance!
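
    On the TRIM question specifically, a quick check worth doing before deciding on the layout, assuming the SSD shows up as /dev/sda:

        # confirm the drive advertises TRIM support
        sudo hdparm -I /dev/sda | grep -i trim
        # on the installed system, trim a mounted filesystem by hand
        sudo fstrim -v /

    With dm-crypt/LUKS, TRIM only passes through the encrypted layer if discards are explicitly allowed (the discard option in /etc/crypttab), which is a small and usually acceptable information leak about free space.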

    Read the article

  • Time out while mounting samba share

    - by nullDev
    I am trying to mount a hard disk connected to my WDTV Live box. The command smbclient -L 192.168.1.2 -U guest gives the following output: Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.5.1] Sharename Type Comment --------- ---- ------- Expansion_Drive Disk Expansion_Drive MICROVAULT Disk MICROVAULT IPC$ IPC IPC Service (WDTV LIVE) Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.5.1] Server Comment --------- ------- WDTVLIVE WDTV LIVE Workgroup Master --------- ------- WORKGROUP But if I try sudo smbmount //WDTVLIVE/Expansion_Drive /home/ashish/wdtvlive/ -o guest,rw I get the following: Warning: mapping 'guest' to 'guest,sec=none' mount error(110): Connection timed out Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) I am able to browse and mount through Nautilus as well, but I don't want the drive to be mounted under gvfs.
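
    smbmount is the deprecated front end; calling mount.cifs directly, and using the box's IP address rather than its NetBIOS name, often gets past this timeout. A minimal sketch, assuming the IP and share shown above and a desktop user with uid 1000:

        sudo apt-get install cifs-utils
        sudo mount -t cifs //192.168.1.2/Expansion_Drive /home/ashish/wdtvlive \
            -o guest,rw,uid=1000,iocharset=utf8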

    Read the article

  • Disable shutdown/suspend if another user is logged in via SSH

    - by Denwerko
    I remember that in Ubuntu versions around 9.04 it was possible to prevent a user from shutting down (and maybe suspending too) the system if another user was logged in. Something like PolicyKit or similar. Is it possible to do in 11.04? Thanks. Edit: if someone needs it (at their own risk), a little change in /usr/lib/pm-utils/bin/pm-action will allow a user to suspend the machine if they are the only user logged in, or when the user runs sudo pm-suspend. Probably not the best piece of code, but it works for now. diff -r 805887c5c0f6 pm-action --- a/pm-action Wed Jun 29 23:32:01 2011 +0200 +++ b/pm-action Wed Jun 29 23:37:23 2011 +0200 @@ -47,6 +47,14 @@ exit 1 fi +if [ "$(id -u)" -eq 0 -o `w -h | cut -f 1 -d " " | sort | uniq | wc -l` -eq 1 ]; then + echo "either youre root or root isnt here and youre only user, continuing" 1>&2 + else + echo "Not suspending, root is here or there is more users" 1>&2 + exit 2 + fi + + remove_suspend_lock() { release_lock "${STASHNAME}.lock" The question still stands: is it possible to forbid shutdown or suspend when there is more than one user logged in (without rewriting system files)?
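
    For the "without rewriting system files" part, ConsoleKit exposes separate PolicyKit actions for stopping or restarting while other sessions are open, so a local-authority file can deny just those. A minimal sketch, assuming the 11.04-era ConsoleKit action names (worth verifying with pkaction | grep consolekit):

        sudo tee /etc/polkit-1/localauthority/50-local.d/restrict-shutdown.pkla <<'EOF'
        [Disallow shutdown/restart while other users are logged in]
        Identity=unix-user:*
        Action=org.freedesktop.consolekit.system.stop-multiple-users;org.freedesktop.consolekit.system.restart-multiple-users
        ResultAny=no
        ResultInactive=no
        ResultActive=no
        EOF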

    Read the article

  • Ubuntu 12.04: Faster boot, hibernate & other questions

    - by Samarth Shukla
    I've recently started exploring Ubuntu (my 1st distro). I fresh installed Precise without a swap (4GB RAM). The only issues are slow boot (regardless of the swap) and instability after a few days of installation. The runtime performance is immaculate otherwise. Even though it isn't needed, I still set swappiness = 10. I've tried the quiet splash options in GRUB and already have preload installed, but it still is pretty slow. I am not too confident about recompiling the kernel yet, but you could please advise me on that too. I've also added the following to fstab: #Move /tmp to RAM: tmpfs /tmp tmpfs defaults,noexec,nosuid 0 0 (Also, if you could please tell me the exact implication/scope of this tweak on physical RAM & the swap.) But nothing has happened really. So what alternatives are there to make it boot faster? Also, right after the fresh install, though there is no swap partition, the system still showed a /dev/zram0 of around 2GB which was never used (probably because of the above fstab edit). Finally, I experimented with hibernate a little, but many claim that it doesn't work on 12.04. (Not to mention, I made a swap file of 4GB for it.) What I did was: sudo gedit /var/lib/polkit-1/localauthority/50-local.d/hibernate.pkla Then I added the following lines, saved the file, and closed the text editor: [Re-enable Hibernate] Identity=unix-user:* Action=org.freedesktop.upower.hibernate ResultActive=yes I also edited the upower policy for hibernate: gksudo gedit /usr/share/polkit-1/actions/org.freedesktop.upower.policy I added these lines: <allow_inactive>no</allow_inactive> <allow_active>yes</allow_active> But it did not work. So is there an alternate method perhaps that can make it work on 12.04?
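
    On the scope of the tmpfs line: it keeps /tmp in memory instead of on disk, a tmpfs is capped at half of physical RAM unless a size= option is given, and its pages can still be pushed out to swap (or zram) under memory pressure. A quick way to see what is actually in effect:

        mount | grep ' /tmp '
        df -h /tmp
        # check whether zram/swap is being touched at all
        cat /proc/swaps
        free -m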

    Read the article

  • Issues with DZ77BH-55K motherboard and i7 processor on 12.04

    - by Naveed
    I just built a computer with an Intel DZ77BH-55K motherboard and an i7-3770 processor. On 12.04, 11.10, 11.04 and Linux Mint 12, the computer has been really laggy. The graphics aren't working (choppy effects, bad resolution) and the keyboard and mouse inputs are laggy and unreliable too (it skips keystrokes). I'm not sure what the problem is or what I can do to fix it. I tried sudo apt-get install mesa-utils but nothing changed. I've also messed around in the BIOS but no luck there either. Any ideas? Could it possibly be a hardware issue?
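
    Before suspecting the hardware, it is worth confirming which kernel driver is bound to the GPU and whether the desktop has fallen back to software rendering; a minimal check (the exact output will vary):

        lspci -nnk | grep -i -A3 'vga\|3d'
        # look for "Kernel driver in use:"; glxinfo comes from the mesa-utils package already installed
        glxinfo | grep -i 'renderer\|direct rendering'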

    Read the article

  • No HDMI sound output on Thinkpad X1

    - by nickf
    I'm having problems getting my sound to output via HDMI to my TV. When I go to Sound Settings, the HDMI device does not appear. ~$ aplay -l **** List of PLAYBACK Hardware Devices **** card 0: PCH [HDA Intel PCH], device 0: CONEXANT Analog [CONEXANT Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: PCH [HDA Intel PCH], device 7: HDMI 1 [HDMI 1] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: PCH [HDA Intel PCH], device 8: HDMI 2 [HDMI 2] Subdevices: 1/1 Subdevice #0: subdevice #0 I don't know if the video information is helpful, but anyway: ~$ sudo lshw -C video *-display description: VGA compatible controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:46 memory:d0000000-d03fffff memory:c0000000-cfffffff ioport:5000(size=64) Any suggestions for me?
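
    Since aplay already lists three HDMI subdevices on card 0, one quick test is to send a sample straight at each of them while the TV is plugged in; the device numbers below are taken from the listing above:

        # try 0,3 then 0,7 then 0,8
        aplay -D plughw:0,3 /usr/share/sounds/alsa/Front_Center.wav
        # if one of them reaches the TV, unmute the matching S/PDIF/HDMI control
        alsamixer -c 0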

    Read the article

  • Nautilus only starts as root user

    - by user7978
    Hello. I am running Ubuntu 10.04 64-bit. When I attempt to start Nautilus from the command line, it does not appear -- although a PID is generated. As root/sudo, I can start Nautilus fine. One note: I run e16 as the window manager, so I do not use Nautilus to draw my desktop. However, even under this configuration, Nautilus used to run fine as a "regular" user. The permissions for Nautilus are the same as those of the other binaries in /usr/bin. I believe this is a Gnome issue, but I'm fumbling at this point.
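
    A common cause of "works as root, silently fails as my user" is a root-owned file left in the home directory by an earlier sudo GUI session (~/.Xauthority or something under ~/.gconf, for example); a quick way to look for that, with the chown paths below only as examples of what the find might report:

        find "$HOME" -maxdepth 2 ! -user "$USER" -ls
        # hand any such files back to your user, e.g.:
        sudo chown -R "$USER": ~/.Xauthority ~/.gconf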

    Read the article

  • Disable incognito in Chrome or Chromium

    - by TheIronKnuckle
    I'm addicted to certain websites to the point where it's interfering with my life regularly, and I'm sick of it. I want to install website blockers that aren't easy to circumvent. In Chrome, incognito mode is easily accessible with Ctrl-Shift-N. That is ridiculous. Whenever I feel an urge to go on an addictive website, it doesn't matter what blockers and regulators I've got installed; three keys can get round them in a second. Simply uninstalling Chrome isn't an option either, as it's way too easy to sudo apt-get install it right back. So yes, I want to disable incognito mode completely (and if possible make it totally impossible to get it back). I note that some guy has figured out how to do it on Windows with a registry entry: http://wmwood.net/software/incognito-gone-get-rid-of-private-browsing/ If it can be done on Windows it can be done on Ubuntu!
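
    Chrome and Chromium on Linux also read managed policies from a directory, and the documented IncognitoModeAvailability policy set to 1 removes incognito entirely; a minimal sketch for Ubuntu's chromium-browser package (Google Chrome reads /etc/opt/chrome/policies/managed/ instead):

        sudo mkdir -p /etc/chromium-browser/policies/managed
        sudo tee /etc/chromium-browser/policies/managed/no-incognito.json <<'EOF'
        { "IncognitoModeAvailability": 1 }
        EOF
        # restart the browser; chrome://policy should now list the setting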

    Read the article

  • Oracle Virtual Desktop Client with USB smart card reader

    - by wim.coekaerts
    I have my Sun Ray thin client at home which I use religiously; I use a Sun Ray 3i at work as my main desktop and just always take my smart card home and happily continue with the hot desking feature. We released a software version of the Sun Ray client called Oracle Virtual Desktop Client (OVDC). There is a version for Windows, Linux and Mac OS X. I have a minimac at home and I installed OVDC on it, which of course works great, but since I like to re-connect to my session that I use at work, I wanted to try out the external USB smart card reader feature. I ordered a cute, low-cost device online and tried it out. As expected, it worked out of the box without -any- configuration. I took the device, plugged it into my minimac, started OVDC, plugged in my smart card and I got the password screen (screensaver) to get into my Sun Ray session on my server at work. Nothing new here, this is a feature that's been in the product, but I had never tried it before and it works out of the box and is super easy and I just felt like sharing :-) Here are a few pictures: (1) login screen (2) smart card reader without card (3) password screen (4) smart card reader with card

    Read the article

  • Webserver insists on opening "blog1.php" instead of "index.php"

    - by pepoluan
    I'm at my wits' end. I have just ripped out a website and am in the process of rebuilding everything. Previously, the 'home page' of the website was a blog, with the address "www.mydomain.com/blog1.php". After exporting everything, I deleted the whole directory, and -- based on request -- immediately created a blog/ directory. The idea is to get the blog back up as soon as possible, and temporarily redirect people accessing www.mydomain.com to the blog. Accessing the blog via http://www.mydomain.com/blog/ works. So I put in an index.php file containing a (temporary) redirect to the blog's address. The problem: The server insists on opening blog1.php instead of index.php. Even after we deleted all the files (including .htaccess). And even putting in a new .htaccess file with the single line of DirectoryIndex index.php doesn't work. The server stubbornly wants blog1.php. Now, the server is actually shared webhosting, so I have no direct access to it. I have to do my work via cPanel. Currently, I work around this issue by creating blog1.php, but I really want to know why the server does not revert to opening index.php. Did I perhaps miss some important settings in the byzantine cPanel menu page?
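
    Two things worth ruling out before blaming the host: a cached permanent redirect from the old site (browsers hang on to 301s, so test with curl or a private window) and a leftover index setting higher up. A minimal sketch, with www.mydomain.com standing in for the real domain and the .htaccess content equally pasteable through cPanel's file manager:

        # ask the server directly, bypassing the browser cache
        curl -I http://www.mydomain.com/
        # docroot .htaccess: explicit index plus a temporary redirect for the old URL
        cat > .htaccess <<'EOF'
        DirectoryIndex index.php
        Redirect 302 /blog1.php /blog/
        EOF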

    Read the article

  • XmlHttpRequest bug?

    - by valdo
    Hello all. I'm writing a program that among other things needs to download a file given its URL. I'm too lazy to implement the Http/Https protocols manually, so I need some library/object/function that'll do the job. Critical requirement: The download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" downloading the file, plus the download must be able to be aborted anytime without any barbaric side effects (such as an internal call to TerminateThread). Nice-to-have requirements: Should be able to download the file "into memory". That means: read the contents of the file as they arrive, not necessarily save them into some "file system" file. It'd be nice to have some convenient Win32 progress notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status. I've chosen the XmlHttpRequest COM object to do the work. It seemed to work fine enough, plus it supported asynchronous mode. However, I noticed that after some period it just stops working. That is, after several successful file downloads it stops downloading anything. I periodically poll it to get its status, it reports "in-progress", but nothing actually happens, and there's no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads - the effect is the same. The object reports "in progress", whereas it doesn't even try to connect to the server (according to network sniffers and system TCP state). The only way to make this object work again is to restart the process. This makes me suspect that there's a sort of a bug (sorry, I meant undocumented feature) in the object. Also it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created. It's probably some global state of the DLL that implements this object. Does anyone know something about this? Is this a known bug? I'm pretty sure there's no chance that I have another bug in my code which merely makes it seem as if the bug is in the XmlHttpRequest. I've done enough tests and spent time with the debugger to conclude without reasonable doubt that it's simply the object that stops working. BTW, while the object should be working, I do all the waiting via MsgWaitXXXX API calls, so that if this object needs the message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect) I give it the opportunity.

    Read the article

  • Upgrade to Quantal: Unity top bar, side bar, and window decorations missing

    - by Nicky Bailuc
    I upgraded from 12.04.1 to 12.10 via the Update Manager and the upgrade said it completed successfully. However, after rebooting, the Unity task bar was missing along with the launch bar and the window decorations. All Compiz settings seemed to have been purged, and at first bootup it gave me a system error. The desktop exists, and I remember that once, when I messed up the Compiz settings, I just had to press Ctrl+Alt+F1 and in the virtual terminal type "unity --reset" then "sudo reboot", and everything worked as if I had reinstalled the entire operating system. This time it said "Warning no variable set. setting to :0. The reset option is now deprecated". What am I supposed to do now? I need this fixed as soon as possible because I need a couple of currently installed programs and the data within them (long story short).
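
    On 12.10 "unity --reset" was deprecated (hence the warning); the rough equivalent is to reset the Compiz/Unity keys in dconf and then restart the shell. A minimal sketch, run from the Ctrl+Alt+F1 console while the graphical session is on display :0:

        dconf reset -f /org/compiz/
        DISPLAY=:0 setsid unity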

    Read the article

  • How to set up passwordless SSH access for the root user

    - by Cerin
    I need to configure a machine so software installation can be automated remotely via SSH. Following the wiki, I was able to set up SSH keys so my user can access the machine without a password, but I still need to manually enter my password when I use sudo, which obviously an automated process shouldn't have to do. Although my /etc/ssh/sshd_config has PermitRootLogin yes, I can't seem to log in as root, presumably because it's not a "real" account with a separate password. How do I configure SSH keys, so a process can remotely log in as root on Ubuntu?
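
    Two common ways to do this, sketched under the assumption that the automation connects as an existing account called, say, deploy: install that account's public key for root directly, or leave root alone and give the one account passwordless sudo:

        # option 1: key-based root logins (root's password login stays disabled)
        sudo mkdir -p /root/.ssh && sudo chmod 700 /root/.ssh
        sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys
        # option 2: passwordless sudo for a single account
        echo 'deploy ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/deploy
        sudo chmod 440 /etc/sudoers.d/deploy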

    Read the article

  • No GUI, no internet connection, please help (12.04)

    - by KB_
    I am new to Ubuntu and I tried to install Ubuntu Server 12.04 on my laptop. Now my problem is that I am unable to connect to the internet (I have a wifi-only connection). Ubuntu didn't recognize the built-in wifi on my Toshiba Satellite L505. There is no GUI, I have the terminal only. I tried sudo apt-get update but I am getting an error message because there is no connection. 1. I need to know if there is any possible way that I can download and install a driver for my wifi. 2. What other options do I have to be able to update Ubuntu? Thanks, KB
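
    With no network at all, the usual route is to identify the wireless chip first and then carry the matching driver or firmware package over on a USB stick from another machine (the package name below is only an example; which one you need depends on what lspci reports):

        # find out which wireless chip the laptop has
        lspci -nn | grep -iE 'network|wireless'
        # copy the needed .deb(s) over on a USB stick, then install offline, e.g.
        sudo dpkg -i linux-firmware_*.deb

    USB-tethering an Android phone is another way to get a temporary wired-style connection for apt-get.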

    Read the article

  • Objective-C design advice for using different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data. While I was thinking about how to set up this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work depends on other people completing their tasks, and on a test server, this slows work down at my end. So the question is: What's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper API calls. Contemplate the following scenario: GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc]init]; getDataCommand.delegate = self; [getDataCommand getData]; Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData] There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are, is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In Objective-C a factory pattern could be implemented, but is that the best route to go for this? GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand]; getDataCommand.delegate = self; [getDataCommand getData]; What are the best practices to achieve this result? Since this would be most useful, even if you have the actual API available, to run tests, or work off-line, the ApiCommands would not necessarily have to be replaced permanently, but there would be the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between: All commands are test and All commands use the live API, rather than selecting them one by one; however, it would also be useful to do for testing one or two actual API commands, mixing them with test data. EDIT The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows: @implementation ApiCommandFactory + (ApiCommand *)newApiCommand { // return [[ApiCommand alloc]init]; return [[ApiCommandMock alloc]init]; } @end And anywhere I want to use the ApiCommand class: GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand]; When the actual API call is required, the comments can be removed and the mock can be commented out. Using new in the message name implies that whoever uses the factory to get an object is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration, i.e.: [ApiCommandFactory newSecondApiCommand:@"param1"]; This will work quite well with repositories as well.

    Read the article

  • Unable to remove some Unity lenses

    - by S Prasanth
    I removed the files, video, photos and friends lenses with the following command: sudo apt-get purge unity-lens-files unity-lens-video unity-lens-photos unity-lens-friends Although the corresponding results have disappeared from the Dash, only the friends tab has been removed. There still are tabs for files, video and photos, albeit empty. How do I remove these empty tabs? I use Ubuntu 13.10 Saucy Salamander. I understand that this issue didn't exist in 12.04. The directory structure of Unity lenses seems to have changed from 12.04 to 13.10. Earlier the lenses were stored in /usr/share/unity/lenses/. That isn't the case now, rendering this answer inapplicable: http://askubuntu.com/a/120116/111720
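
    On 13.10 the Dash categories are driven largely by unity-scope-* packages rather than the old unity-lens-* ones, so the leftover empty tabs may correspond to scopes that are still installed; a way to see what is actually there before purging anything (the package names are whatever the listing reports, not fixed values):

        dpkg -l | grep -E 'unity-(lens|scope)'
        # purge the scope packages matching the unwanted tabs, then restart the shell
        sudo apt-get purge unity-scope-<name>
        setsid unity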

    Read the article

  • Crontab opens as blank page, cannot save

    - by Sarah
    I am really not familiar with Linux, and only started using it recently, so be patient with me. I am trying to control a camera at regular intervals through a script that is called from the crontab. When I start up the computer, I can open crontab, edit and save, and everything is executed correctly. However, I can never open crontab a second time, unless I restart the computer first. If I type crontab -e, I get a blank page, located in the /tmp directory. I can enter my commands in there, but cannot save it. I don't know if this is relevant, but when I try sudo crontab -e, I get something like "no cron installed for root". Any help is really appreciated! Sarah
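
    A blank file from crontab -e usually just means no crontab is currently installed for that user; the copy in /tmp is normal, since the editor works on a temporary file and cron installs it when the editor exits cleanly. So the "cannot save" part is more likely an editor problem than a cron one. Two quick checks, assuming a stock Ubuntu install:

        # show what cron actually has on record for this user
        crontab -l
        # pick a friendlier default editor (e.g. nano) for crontab -e
        select-editor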

    Read the article

  • A Virtual(Box) Christmas!

    - by Gilles Gravier
    Hello and merry Christmas everybody! Yes. This year, it's a v4.0 Xmas! VirtualBox just came out with a new major version release. Version 4.0 is available at: http://www.virtualbox.org/wiki/Downloads ... and of course, it's open source, and it's free. And I suggest (strongly) that you also download the free Extension Pack from the same page... Brings USB 2.0 emulation and more! I upgraded the version on my Solaris Express (get that from: http://www.oracle.com/solaris/index.html and follow the Download Solaris Express link on the right) laptop (pfexec pkgrm SUNWvbox and then pfexec pkgadd -d the .pkg in the compressed archive file, and install the extensions from the VBox GUI once it's launched). If you are on an old-fashioned OS without proper RBAC, replace pfexec with sudo. Which you can also do on Solaris if you really want to. :) I tell you, it's Christmas, for sure! Merry VBox! Gilles.

    Read the article

  • Is there a bug with restart in Edubuntu 12.04?

    - by Ket
    After a clean install of Edubuntu 12.04 on an Acer AO531-h netbook, restart doesn't work. The process starts normally, but just before it shuts down the netbook freezes and I have to force a shutdown. The command "sudo reboot" has the same problem. I have no issues with shutdown, only with restart. I'm an absolute beginner. Netbook specs: Acer, Intel Atom CPU N270 @ 1.60GHz (1.05 GHz), 0.98GB RAM. Dual booting with Windows XP. No problems with Windows.
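
    A freeze at the very last step of a reboot is often a BIOS/ACPI quirk, and a common workaround is to make the kernel use a different reboot method via the reboot= parameter (bios, acpi, pci, ...). A minimal sketch, assuming GRUB 2 as installed by 12.04:

        # test once: at the GRUB menu press 'e' and append  reboot=bios  to the linux line
        # if that helps, make it permanent by adding it to GRUB_CMDLINE_LINUX_DEFAULT:
        sudoedit /etc/default/grub
        sudo update-grub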

    Read the article

  • You do not appear to be using the NVIDIA X driver

    - by Vishal shekhar
    My laptop has an NVIDIA GT 540M. Yesterday I installed nvidia-current after adding the PPA with sudo apt-add-repository ppa:ubuntu-x-swat/x-updates. Then I ran sudo nvidia-xconfig and rebooted. My desktop visual effects changed and it looks good, as if NVIDIA is working, but GLX is still not working and nvidia-settings tells me "You do not appear to be using the NVIDIA X driver". My dkms status is nvidia-current, 304.43, 3.2.0-30-generic-pae, i686: installed lspci |grep VGA output : 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev a1) Also, I cannot log in to my admin account, but when I log in to a standard user or guest it works; still, the NVIDIA X driver is not working. Can anyone suggest something so that the NVIDIA X driver starts working? I have seen many forums but none worked for me. Earlier I tried nvidia-173 and nvidia-current (before the x-swat/x-updates repository) but none works for me.
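
    The lspci output shows both an Intel and an NVIDIA GPU, i.e. an Optimus hybrid laptop, and on 12.04-era Ubuntu the plain nvidia-current driver cannot drive the X display on such machines, which matches the nvidia-settings message. The usual route at the time was Bumblebee, sketched below after first undoing the xorg.conf that nvidia-xconfig generated:

        # remove the generated xorg.conf so the Intel driver can run the desktop again
        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.backup
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        # after a reboot, run individual programs on the NVIDIA GPU:
        optirun glxgears

    The failing login on the admin account alone is often a separate issue, typically a root-owned ~/.Xauthority left behind by running graphical programs with sudo, and is worth checking independently.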

    Read the article
