Search Results

Search found 11618 results on 465 pages for 'shared storage'.

Page 224/465 | < Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >

  • Library and several small programs that use it: how should I structure my git repository?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently and a number of small programs that have changes that are connected to changes in the library?
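    A hedged sketch of one common layout: move the library into its own repository that everyone merges into, and pin it inside each driver repository as a git submodule (the URL below is hypothetical):

        # In the drivers repository, track the shared library as a submodule:
        git submodule add git@example.com:team/shared-lib.git lib
        git commit -m "Track shared-lib as a submodule"

        # When the library gains a function a driver needs, bump the pinned commit:
        cd lib && git pull origin master && cd ..
        git add lib && git commit -m "Update shared-lib to latest"

    Logically connected changes then become two commits (one in each repository), and the library can be merged with co-workers' forks without dragging the driver programs along.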

    Read the article

  • Where are the systray icons for Dropbox in Ubuntu desktop 13.04 (minimal)?

    - by samvv
    I reinstalled Ubuntu desktop using the minimal CD image and the following command: $ sudo apt-get install ubuntu-desktop --no-install-recommends. After that I used Ubuntu Software Center to make sure Unity supports application indicators: http://i.imgur.com/bYF162w.png. Everything works great except for Dropbox: for some reason the icon doesn't appear in the tray, even though the application is running. Steam, on the other hand, runs just fine, so it seems there is nothing wrong with the tray itself. According to this post the tray icons should be in /usr/share/icons/hicolor/22x22/status, but that directory doesn't contain any Dropbox icons, and neither do any of the other resolutions. The answer is a bit outdated, so I'm not entirely sure it still applies to the current version of Dropbox. I did the usual thing of reinstalling Dropbox with: sudo apt-get purge nautilus-dropbox sudo apt-get install nautilus-dropbox But that didn't solve anything either. Does somebody know how to fix this issue?
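    One workaround commonly suggested for this vintage of Dropbox (a sketch, not an official fix): restart the daemon with the D-Bus session address cleared, which makes Dropbox fall back to a plain GtkStatusIcon instead of an app indicator:

        dropbox stop
        # Clearing DBUS_SESSION_BUS_ADDRESS is the (hacky) trick here:
        DBUS_SESSION_BUS_ADDRESS="" dropbox start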

    Read the article

  • Upgrade from 10.10 to 11.04

    - by hemanta pathak
    Upgrading from 10.10 to 11.04 using Update Manager normally works fine. But after installing a piece of software that bundles its own runtime environment (loader and system files) and installs its custom loader in the location where the native one resides, the upgrade fails. (The upgrade starts and after some time encounters an error and aborts.) Basically, /usr/bin/dpkg throws an error about being unable to locate a system shared library in the aforesaid third-party runtime folder. (/usr/bin/dpkg should not search the third-party runtime folder; instead it should look in the system default folder.) But if we remove the installed third-party loader from the default system location /lib and place it somewhere else, the upgrade problem goes away. This makes me believe /usr/bin/dpkg invokes (loads) the wrong loader and as such goes looking for the dependent libraries in the third-party folder. Can someone take a look at this? Is there some bug in dpkg?
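    A quick diagnostic sketch for this situation: check which program interpreter (loader) the dpkg binary actually requests, and where the run-time linker resolves its libraries from:

        # Which loader does the binary declare?
        readelf -l /usr/bin/dpkg | grep interpreter
        # Which shared libraries does it resolve, and from where?
        ldd /usr/bin/dpkg
        # What is in the run-time linker cache?
        ldconfig -p | grep libc

    If ldd shows libraries resolving into the third-party runtime folder, the replaced loader (or a stray ld.so.conf entry) is the likely culprit rather than dpkg itself.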

    Read the article

  • Webhosting with custom database choice [closed]

    - by churchill614
    Possible Duplicate: How to find web hosting that meets my requirements? I am trying to find somewhere to host a website which uses OrientDB as its database. My budget doesn't stretch to a dedicated server where I can configure everything as I need it. Rather, I am hoping to find somewhere, ideally UK-based, of the normal shared-server variety, that will allow me to install OrientDB (or install it for me) on their server. Is anybody able to point me in a good direction for this, please? (Whilst UK is preferable, it is not essential.)

    Read the article

  • Terminator Skull Crafted from Dollar Store Parts [Video]

    - by Jason Fitzpatrick
    Earlier this year we shared an Iron Man prop build made from Dollar Store parts. The same Dollar Store tinker is at it again, this time building a Terminator endoskull. James Bruton has a sort of mad tinker knack for finding odds and ends at the Dollar Store and mashing them together into novel creations. In the video below, he shows how he took a pile of random junk from the store (plastic bowls, cheap computer speakers, even the packaging the junk came in) and turned it into a surprisingly polished Terminator skull. Hit up the link below for the build in photo-tutorial format. Dollar Store Terminator Endoskull Build [via Make]

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the Definition of Done as a technique to continuously improve their agile maturity over time.

    We're probably all familiar with the concept of "Done Done" that represents what *actually* being done a feature means. Not just when a developer says he's done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of "Done Done", they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done.

    The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoD's for each of these concepts (Stories/Sprints/Releases). We'll usually start by just brainstorming a bunch of activities that could end up in these various DoD's. Here are some examples:

      - Code Reviews
      - StyleCop
      - FxCop
      - User Manuals Updated
      - Architecture Docs Updated
      - Tested by QA
      - Tested by UAT
      - Installers Created
      - Support Knowledge Base Updated
      - Deployment Instructions (for Ops) written
      - Automated Unit Tests Run
      - Automated Integration Tests Run

    Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? Every sprint? Every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don't have to move them all down to User Story immediately, but as a team we figure out what we think we're capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoD's. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, doing UAT testing (typical Release-cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time.

    Sure, there are other aspects to maturity outside of this, but it's a great technique, that's easy to visualize, to drive agility into the team. Just keep moving those activities (aka "gates") down the board from Release->Sprint->Story. I'll try to give an example of what a recent client of mine had for their DoD's (this is from memory, so probably not 100% accurate):

    Release
      - Create/update deployment instructions for Ops
      - Instructional videos updated
      - Run manual regression test suite
      - UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production, and asking other business groups to test their own apps to ensure we didn't break anything outside our system)

    Sprint
      - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
      - User guides updated
      - Sprint features video created (in this case we decided to create a video each sprint showing off the progress; a video version of the Sprint Demo)

    User Story
      - Manual test scripts developed and run
      - Tested by BA
      - Deployed in shared QA environment, using the automated deployment process
      - Peer code review

    Check-In
      - Compiled (warning-free)
      - Passes StyleCop
      - Passes FxCop
      - Create installer packages
      - Run automated tests
      - Run automated integration tests

    PS - One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end date rolls around (time-boxed), isn't a DoD on a Sprint meaningless? It's done on the end date regardless of whether those other activities are complete or not. My answer is that while that statement is true (the Sprint is done regardless when the end date rolls around), if the DoD activities haven't been completed I would consider the Sprint a failure, similar to not completing what was committed/planned. Failure may be too strong a word, but you get the idea. In the Retrospective that becomes an agenda item, to discuss and understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Extremely slow transfer speed ubuntu -> Windows

    - by Hailwood
    I have two laptops: one is running Ubuntu 12.04 (EXT4), the other is running Windows 7 (NTFS). I am copying over 40GB of data (one file) from the Ubuntu laptop to the Windows laptop (browsing the shared folder on Ubuntu using Windows, copy/paste), but I am getting transfer speeds topping out at ~700kb/s. Surely this is not right. I am transferring via wifi on both laptops. My download speeds can reach 7-8mb/s on both laptops, so I know it is not the wifi cards or the router topping out.

        wlan0     Link encap:Ethernet  HWaddr 84:4b:f5:db:b4:85
                  inet addr:192.168.1.66  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::864b:f5ff:fedb:b485/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11941185 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:11306693 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:10087111370 (10.0 GB)  TX bytes:7843524888 (7.8 GB)
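    To separate the wifi link from the SMB layer, one diagnostic sketch (assuming iperf is installed on both machines) is to measure raw TCP throughput between the two laptops:

        # On the Windows laptop (server side):
        iperf -s
        # On the Ubuntu laptop (client side; substitute the Windows laptop's address):
        iperf -c 192.168.1.x -t 30

    If iperf also tops out near 700kb/s, the problem is the wireless link; if it reaches several MB/s, look at the Samba/CIFS layer instead.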

    Read the article

  • Opinions on my recruitment portal idea [closed]

    - by user1498503
    I am creating a recruitment portal for IT professionals. In this, recruiters, while creating a job post, would be asked to create a skills-requirement matrix:

      Essential skills: asp.net MVC, Entity Framework
      Desired skills: SQL Server 2008, IIS 7.0

    On the other hand, job seekers would also have their own skills matrix:

      Jobseeker #1 - Core skills: asp.net MVC, Entity Framework, MongoDB; Secondary skills: SQL Server 2008, IIS 7.0
      Jobseeker #2 - Core skills: asp.net Web Forms; Secondary skills: SQL Server 2008, IIS 7.0

    So when both job seekers apply for the same job, would it be a good idea for both of them to see each other's skills matrix for comparison? (No personal details or CVs are shared.) I think comparisons would help job seekers understand what their areas of improvement are and could motivate them to fill the skills gap. Your opinion would be appreciated. Regards

    Read the article

  • MVC .Net, WebMatrix talk presentations and webinars

    - by subodhnpushpak
    I presented sessions on MVC .Net and WebMatrix. I covered stuff like what's new in MVC .Net and the architecture goodness of the MVC pattern. I also demonstrated how MVC 3 / MVC 4 harness HTML5 / mobile along with jQuery and Modernizr. PHP coding using MVC and WebMatrix, and other advanced stuff like hosting PHP on Windows or porting a MySQL DB to MSSQL, is also part of the demo in the sessions. The slide decks are available at the links below, and all the demos were recorded and shared there as well.

      WebMatrix
      WebMatrix2

    If you have any suggestions / ideas / comments, please do post.

    Read the article

  • How to set Qwt path to the run-time linker in Xubuntu

    - by Rahul
    I've successfully installed Qwt in Xubuntu 12.04 (qmake, make, make install). But now I need to make the Qwt path known to the run-time linker of Xubuntu. In the manual it's given like this: If you have installed a shared library, its path has to be known to the run-time linker of your operating system. On Linux systems read "man ldconfig" (or google for it). Another option is to use the LD_LIBRARY_PATH (on some systems LIBPATH is used instead; on Mac OS X it is called DYLD_LIBRARY_PATH) environment variable. But being a newbie to the Linux environment, I'm not able to proceed further. Please help me with this.
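    A minimal sketch of both options the manual mentions (the install prefix below is an assumption; check where make install actually put libqwt):

        # Option 1: register the Qwt lib directory with the run-time linker, system-wide.
        echo "/usr/local/qwt-6.0.1/lib" | sudo tee /etc/ld.so.conf.d/qwt.conf
        sudo ldconfig

        # Option 2: per-session environment variable instead of a system-wide entry.
        export LD_LIBRARY_PATH=/usr/local/qwt-6.0.1/lib:$LD_LIBRARY_PATH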

    Read the article

  • Connecting Galaxy Note: Unable to mount Android Error initializing camera: -53: Could not claim the USB device

    - by Claudiu
    I'm running CM9 ICS on my Galaxy Note and I've tried everything I could find googling around, but to no result. If I look on the phone in USB settings there is the option for mass storage, but it's grey and therefore not selectable. I don't know what the problem is, but I saw somewhere that it might be an old version of libmtp, so I tried to install libmtp 1.1.3 with ./configure, make, make install. But even after that, when I try mtp-detect it reports libmtp 1.1.1. Is that normal? Anyway, when I run mtp-detect here is what it gives me:

        libmtp version: 1.1.1
        Listing raw device(s)
        Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note.
        Found 1 device(s):
        Samsung: GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note (04e8:6860) @ bus 2, dev 6
        Attempting to connect device(s)
        ignoring usb_claim_interface = -99
        PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
        LIBMTP libusb: Attempt to reset device
        ignoring usb_claim_interface = -99
        LIBMTP PANIC: failed to open session on second attempt
        Unable to open raw device 0
        OK.

    Thanks in advance.
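    A hedged sketch of one common remedy for "could not claim the USB device" errors: grant your user access with a udev rule, reusing the VID:PID that mtp-detect reports above (the file name is arbitrary):

        sudo tee /etc/udev/rules.d/51-galaxy-note.rules <<'EOF'
        SUBSYSTEM=="usb", ATTR{idVendor}=="04e8", ATTR{idProduct}=="6860", MODE="0666"
        EOF
        sudo udevadm control --reload-rules
        # Replug the phone, then retry mtp-detect.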

    Read the article

  • Mount and unmount errors when USB is connected in 12.10

    - by Mysterio
    I just installed 12.10 and noticed some strange behaviour every time I connect a USB storage device. A dialogue reports I already have the device mounted by a different user, but there is only one user account on this setup. Also, when I try to unmount the inserted USB drive, I have to gain root privileges before the device will unmount successfully. I can't copy/delete any file without root access either, with the default file manager or the third-party file manager, Marlin. How do I resolve this rather frustrating experience? Thanks in advance.

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^)

    Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed.

    Anyone who has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!". But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic: the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;\^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer. Well, maybe 'evil' isn't the right word; I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think about it, I may be one from time to time. :\^{

    But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes?

    So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • Collaboration platforms

    - by Thomas
    Are there any good collaboration platforms for game development? This would include the following features:

      - An easy way to find the various people you need to build games (programmer, artist, etc.) and form a team, like for example CodePlex
      - An online portfolio for users where they can offer their services (either paid or free)
      - The possibility to create a game-specific blog or site with social media integration to show the world what's being created
      - An easy way to manage game content/resources with sufficient online storage, version control and, if possible, source control
      - Managing all phases of game development (startup, creating a concept, finding a team, creating a proof of concept, the production phase, etc.) and publishing specific information for each phase, also on social media etc.
      - Managing the asset creation flow (a request for specific content like a sound, production of the sound, uploading the sound, notification to the requester, implementation of the file, retouching in several cycles, etc.)

    Read the article

  • AngularJS: structuring a web application with multiple ng-apps

    - by mg1075
    The blogosphere has a number of articles on the topic of AngularJS app structuring guidelines, such as these (and others):

      - http://www.johnpapa.net/angular-app-structuring-guidelines/
      - http://codingsmackdown.tv/blog/2013/04/19/angularjs-modules-for-great-justice/
      - http://danorlando.com/angularjs-architecture-understanding-modules/
      - http://henriquat.re/modularizing-angularjs/modularizing-angular-applications/modularizing-angular-applications.html

    However, one scenario I have yet to come across guidelines and best practices for is the case where you have a large web application containing multiple "mini-spa" apps, and the mini-spa apps all share a certain amount of code. I am not referring to the case of trying to have multiple ng-app declarations on the same page; rather, I mean different sections of a large site that have their own, unique ng-app declaration. As Scott Allen writes in his OdeToCode blog: One scenario I haven't found addressed very well is the scenario where multiple apps exist in the same greater web application and require some shared code on the client. Are there any recommended approaches to take, pitfalls to avoid, or good sample structures of this scenario that you can point to?

    Read the article

  • Cannot read from 2nd SATA data drive connected via SATA docking station

    - by Robbo
    I installed 10.10 this week on a dual-boot system. Everything else works fine, but I cannot read from the 2nd SATA drive with all my data. The same drive works normally when booted to Windows XP. The interesting part is that I can see the drive in Ubuntu Disk Manager, can read all its attributes, and can test it; it shows up in Disk Manager, Storage Device Manager and Mount Manager, and I can mount it, even change attributes. It appears healthy, but does not show up in "Computer" or anywhere else from which it can be accessed. The drive is connected via an external e-SATA docking station which is connected to a SATA port on the motherboard.

    Read the article

  • KVM guest disk performance

    - by Alex
    My KVM guest does max. 200MB/s although the host easily does 700MB/s (RAID 0 with 4 SSDs). Configuration: file-based storage (raw), cache none. Host: 24 cores, 96GB RAM, Ubuntu 12.04.1 LTS and virt-manager. I suspect the CPU to be the bottleneck (one core spikes during hdparm). Has anyone experienced the same, or does anyone have an explanation? Edit: one more piece of info: the guest is the same as the host (Ubuntu 12). The same poor disk performance was observed with Windows 2008 R2 and SUSE Enterprise Linux (9 or 10, I think). Max 1 guest running.
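    Since cache is already 'none', a common next suspect (a suggestion, not a guaranteed fix) is the emulated disk bus: a paravirtual virtio disk usually outperforms emulated IDE/SATA, and io='native' on the driver element is another frequent tuning tip. A quick check, assuming a libvirt guest named my-guest:

        # Does the guest disk use the virtio bus, and which I/O mode?
        virsh dumpxml my-guest | grep -A4 "<disk"
        # Look for <target ... bus='virtio'/> and <driver ... cache='none' io='native'/>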

    Read the article

  • Ideas for a personal website [on hold]

    - by user1314836
    I am planning to register a personal domain + hosting space to be less dependent on external companies. I would like to know if you could share some ideas of what I could do with my own domain. I have been thinking of some myself:

      - Use my own e-mail (but Google Apps is no longer free...)
      - Share my photos instead of using Dropbox
      - Receive big files or many files through anonymous FTP
      - Occasional backups? (I don't know if my host would let me use the hosting for personal storage)

    Any other ideas, or comments on the above?

    Read the article

  • Situations that require protecting files against tampering when stored on a user's computer

    - by Joel
    I'm making a 'Pokémon Storage System' with a client/server model, and as part of that I was thinking of storing an inventory file on the user's computer which I do not wish to be edited except by my program. An alternative would be to store the inventory file on the server instead and control its editing by sending commands to the server. But I was wondering: are there any situations which require files to be stored on a user's computer where editing would be undesirable, and if so, how do you protect the files? I was thinking AES with some sort of checksum?
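    Worth noting: a plain checksum adds little here, since anyone who can edit the file can recompute the checksum. A keyed MAC is the usual shape, with the caveat that any key embedded in the client can be extracted, which is why the server-side option is fundamentally stronger. A hedged encrypt-then-MAC sketch with the openssl CLI (all file and key names are hypothetical):

        # Encrypt the inventory, then MAC the ciphertext with a separate key:
        openssl enc -aes-256-cbc -salt -in inventory.dat -out inventory.enc -pass file:enc.key
        openssl dgst -sha256 -hmac "$(cat mac.key)" inventory.enc > inventory.enc.hmac

        # On load: recompute the HMAC, compare it to inventory.enc.hmac, and only then decrypt:
        openssl enc -d -aes-256-cbc -in inventory.enc -out inventory.dat -pass file:enc.key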

    Read the article

  • What could be a reason for a cross-platform server application developer to make his app work in multiple processes?

    - by Kabumbus
    So we are considering development of a server app that is heavily loaded with processing big data streams. The app will be running on one powerful server. The server app shall be developed as a cross-platform application, so as to work on Windows, Mac OS X and Linux: the same code on many platforms, for a standalone server architecture. We wonder what benefits distributing the application not only over threads but over processes as well would bring to programmers and to server end users, and why. Also, some people said to me that even having 48 cores, 4 process threads would be shared by the OS across all cores... is that true, BTW?

    Read the article

  • Is it possible for Windows viruses downloaded through Ubuntu to affect my Windows OS?

    - by fr33c0untry
    I know that Ubuntu is immune to viruses, so there is no question of it getting infected while browsing the net. However, I frequently transfer files from my pendrive (which I get from other virus-infested computers) to my own laptop and save them on the data drive which is shared by both Windows and Ubuntu. I would like to know if there is a chance for Windows viruses to get saved there and then infect Windows whenever I switch to it later on. It's ironic that I scan my pendrive using Avast on Windows and then save all my files to my hard drive to keep my laptop free from viruses, even though I have Ubuntu. Can anyone suggest an alternative? Thanks in advance.
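    One alternative (a sketch, assuming the pendrive mounts under /media/pendrive): scan the stick from Ubuntu itself with ClamAV, whose signature database includes Windows malware, so infected files are caught before they ever reach the shared data drive:

        sudo apt-get install clamav
        sudo freshclam                          # update the signature database
        clamscan -r --infected /media/pendrive  # recursive scan, list only hits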

    Read the article

  • Audio playback: part of song is skipped

    - by Homulvas
    I am experiencing some problems with music playback after upgrading to Ubuntu 12.10. Basically, some songs stop playing after some time, as if the song had ended. It's always the same songs and the same spot. The weird thing is that it happens with Clementine and Totem, but VLC doesn't have this problem, and the files also play as they should on Windows. I'm guessing there might be a problem with some library that's shared by the first two applications. I don't know if it's relevant, but the file format of the audio files is FLAC (I don't know if the problem affects MP3, because I don't have many of them).

    Read the article

  • gnome-file-share-properties doesn't work

    - by Riccardo Magrini
    I've configured gnome-file-share-properties on all my Ubuntu PCs to share the Public directory with each other. I followed some guides found on the Internet for configuring it; all explain the same procedure, but in my case I don't see any shared Public directory on the other PCs. According to this link http://library.gnome.org/users/gnome-user-share/stable/gnome-user-share-getting-started.html.en I should see the Public directory, plus the name of the PC that shares it, in Nautilus Places. In my case I don't see anything; I do see all the machines under Network, but if I try to click on one I receive this: "DBus error org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)" Note: I don't want to use Samba because all my PCs run Ubuntu, and the firewall is disabled on all of them.

    Read the article

  • 5 Reasons to Upgrade to WebLogic Server 11g

    - by ruma.sanyal
    Do you want to optimize your middleware performance and manageability? Are you looking to modernize your IT infrastructure and lower your total cost of ownership? Don't miss this upcoming Webcast to learn five reasons why you should switch to Oracle WebLogic Server 11g. Mike Lehmann, Senior Director of Product Management for Oracle WebLogic Server, will share best practices and helpful tips for a fast, low-risk upgrade. You will also learn how your company can leverage the optimal support, rich capabilities, and extensive options in Oracle WebLogic Server 11g to:

      - Diagnose and fix performance issues
      - Improve data center utilization and density
      - Shorten application release cycles
      - Run applications in a shared services infrastructure
      - Manage heterogeneous infrastructures

    Register for this complimentary Webcast.

    Read the article

  • Can only connect to file server on second attempt

    - by Ross Fleming
    I have a FreeNAS file server on my local network, and I usually connect to it from Windows and Ubuntu computers. Ever since I upgraded from Ubuntu 12.04 to 12.10, Ubuntu will only connect on the second attempt. By which I mean: I browse to it via the file manager, and once I click on the link in "Bookmarks" it complains that it could not connect. If I then try again, it connects successfully and keeps its connection up until the laptop is suspended or loses connection to the LAN for whatever reason. This isn't much of a problem, as I don't mind having to click twice, but my real problem is that my scheduled backup complains that it cannot connect to the storage device if the share has not already been accessed during the current session. Is there some way to either stop the issue altogether, or force the backup tool (the default one) to immediately make a second attempt at connecting?
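    One way to get that second attempt automatically (a hedged sketch; the share URL and retry count are assumptions, and gvfs-mount ships with 12.10) is to pre-mount the share with a small retry loop before the backup job runs:

        #!/bin/sh
        # Try the GVFS mount up to twice, pausing between attempts,
        # then hand off to the scheduled backup.
        for attempt in 1 2; do
            gvfs-mount "smb://freenas/backup" && break
            sleep 5
        done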

    Read the article
