Search Results

Search found 9983 results on 400 pages for 'fuzzy c means'.


  • New Responsibilities

    - by Robert May
    With the start of the new year, I’m starting new responsibilities at Veracity. One responsibility that is staying constant is my love and evangelism of Agile. In fact, I’ll be spending more time ensuring that all Veracity teams practice Agile, specifically Scrum, in a consistent manner, so that all of our clients and consultants have a similar experience.

    Imagine, if you will, working for a consulting company on a project where the project management style is Waterfall in iterations. Now you move to another project where you’re doing real Scrum, but in both cases you were told that what you were doing was Scrum. Rather confusing. I’ve found, however, that this happens on many teams and many projects. Most companies simply aren’t disciplined enough to do Scrum. Some think that being Agile means not being disciplined. The opposite is true!

    So, my goals for Veracity are to make sure that all of our consultants have a consistent understanding of what Scrum is and how it works, and then to make sure that on the projects they’re assigned to, Scrum is appropriately applied for their situation. This will keep them happier, but also make switching to other projects easier and more consistent. If we aren’t doing the project management on a project, we’ll help the team know what good Agile practices should look like, so that they can give good advice to the client and have a consistent feel if they move to another project. I’m really looking forward to these new duties.


  • ImageResizer - AzureReader2 with Azure SDK 2.2

    - by Chris Skardon
    Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/10/29/imageresizer---azurereader2-with-azure-sdk-2.2.aspx

    So Azure SDK 2.2 came out recently, which means I can open my Azure projects in VS 2013 (yay), so I decided to do an upgrade of my MVC4 project to MVC5. I followed this link on how to do the upgrade, and generally things went OK. Then I fired up my app and ran into a binding issue: AzureReader2 was trying to use Microsoft.WindowsAzure.Storage, Version=2.1.0.0, but alas, it couldn’t find it. I am not the only one (see Stack Overflow), and one solution is to run ‘Add-BindingRedirect’ from the Package Manager Console, but that didn’t solve the problem for me, as it didn’t pick up on the Azure assemblies, so I resorted to adding the redirect manually. So, in short, to get AzureReader2 to work with Azure SDK 2.2, you need to add the following to your web.config:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <!-- Other bindings here! -->
            <dependentAssembly>
              <assemblyIdentity name="Microsoft.WindowsAzure.Storage" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
              <bindingRedirect oldVersion="0.0.0.0-2.1.0.3" newVersion="2.1.0.3"/>
            </dependentAssembly>
          </assemblyBinding>
        </runtime>


  • How to recover broken dpkg after lucid-bleed ppa-purge?

    - by TryTryAgain
    Did a ppa-purge of lucid-bleed, and dpkg didn’t downgrade properly and is now broken:

        dpkg: PreDepends: tar (>= 1.23) but 1.22-2ubuntu1 is to be installed

    What scares me is that when simulating the removal of dpkg I get "Removing this package may render the system unusable. Are you sure you want to do that?", and the list of packages which depend on it, which would also be removed, is obviously very long. Is it safe for me to remove dpkg just to reinstall it? How would I ensure that the packages which were also removed then get reinstalled? Will forcing the version of dpkg help? (FYI: simulating a forced version brings up a much smaller list of applications which would also be removed.) Any other suggestions?

    Additional information based on comments: ppa-purge log: http://pastebin.com/1kT8cLvP

    If I run sudo apt-get install dpkg=1.15.5.6ubuntu4.5 I get:

        The following packages have unmet dependencies:
          libdpkg-perl: Depends: dpkg (= 1.15.8) but 1.15.5.6ubuntu4.5 is to be installed

    which sucks, because that means more would be broken after doing so. But when I force the version through Synaptic I get: To be removed: alien, build-essential, cdbs, checkinstall, debhelper, devscripts, dpkg-dev, google-earth-stable, googleearth-package, libdpkg-perl, lintian, lsb, lsb-core, lsb-cxx, lsb-desktop, lsb-graphics, lsb-languages, lsb-multimedia, lsb-printing, lsb-qt4, lsb-security, ubuntu-dev-tools.
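
    One way to keep track of the collateral removals (a sketch, assuming the version string above; the file name is illustrative) is to capture apt's simulated removal list before forcing the downgrade, then reinstall everything from that list afterwards:

        # Dry-run the forced downgrade and record the packages apt would remove
        sudo apt-get -s install dpkg=1.15.5.6ubuntu4.5 | awk '/^Remv/ {print $2}' > removed-packages.txt

        # Do the downgrade for real, then reinstall what was removed
        sudo apt-get install dpkg=1.15.5.6ubuntu4.5
        xargs -a removed-packages.txt sudo apt-get install -y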


  • Rankings dropping after small URL-change WITH 301-redirect

    - by David
    Two weeks ago, we attempted to make the URLs of ca. 12 pages more search-engine friendly. We changed three things:

    1. Made the URLs more SEF, from /????-????/brandname.html (meaning /aircon-price/daikin.html) to /????-brandnameinenglish-brandnameinthai.html. We set up 301 redirects from the old URLs to the new ones. You can find an example and the link to our page here: http://bit.ly/XRoTOK. There are no direct external links to the old URLs.

    2. Added text to the image links from the homepage to the brand pages. Before these changes, we only linked to those brands with a picture, so we added some text under each picture. You can see that here, in the left submenu: http://bit.ly/XRpfoF

    3. Minor changes to titles, h1 tags, meta descriptions, etc., to better match the on-site optimization with targeted keywords. For example, instead of full brand names we used what was really searched for: "Mitsubishi Electric Mr. Slim" became "???? Mitsubishi" (meaning "Aircon Mitsubishi").

    Three days after these changes, we noticed a heavy drop (an 80% loss in non-paid search traffic) in rankings and traffic for those pages, and also for all pages sub-categorized under them. Rankings for all keywords not affected by the changes stayed the same. Any ideas what happened, and how we can regain our old rankings? What we already did was submit a new sitemap. Help much appreciated. Best regards, David


  • How can I "diff" two files with Nautilus?

    - by bioShark
    I have installed Meld and found it to be a great comparison tool. Unfortunately there is no integration with Nautilus 3.2, which means I can't right-click on files and select an option to open them in Meld for comparison. I have seen in the tool's comments that it needs the diff-ext package to be installed. That package has been removed from the Ubuntu universe repository, I am guessing because of GTK 3. Even after manually downloading the diff-ext package from SourceForge, when I try to configure it the check fails with this message:

        checking for DIFF_EXT... configure: error: Package requirements
        (libnautilus-extension >= 2.14.0 gconf-2.0 >= 2.14.0 gnome-vfs-module-2.0 >= 2.14) were not met:
        No package 'libnautilus-extension' found
        No package 'gconf-2.0' found
        No package 'gnome-vfs-module-2.0' found

    From this output I gather that GTK 2 era libraries are indeed required to install the diff extension for Nautilus. Now, my question is: is there a way to integrate Meld into Nautilus? Or are there any other diff tools which integrate with the current, GTK 3 based Nautilus? I am using Ubuntu 11.10, if there was any doubt so far. Cheers and thanks in advance.
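
    In the meantime, a common workaround (a sketch, not a packaged integration; the script name is arbitrary) is a Nautilus script that launches Meld on the selected files. Saved as e.g. compare-with-meld in the scripts directory (~/.gnome2/nautilus-scripts on older releases, ~/.local/share/nautilus/scripts on newer ones) and made executable with chmod +x, it appears in the right-click menu under Scripts:

        #!/bin/sh
        # Open the two selected files in Meld.
        # Nautilus passes the selection as newline-separated paths,
        # so split only on newlines to survive spaces in file names.
        IFS='
        '
        set -- $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS
        meld "$1" "$2" &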


  • Why is Desktop Unity using the global application menu?

    - by Kazade
    It was announced in another question that the desktop version of Unity will keep the global menu by default. Here are the facts:

    The global menu was introduced into UNE to save vertical screen space, because at netbook resolutions vertical space is limited. On a modern desktop with a high resolution, there is ample vertical space, making this unnecessary. On the announcement of UNE global menus, Mark Shuttleworth himself said the following: "There are outstanding questions about the usability of a panel-hosted menu on much larger screens, where the window and the menu could be very far apart." The benefits of a global menu don't seem to carry across to a high-resolution desktop, and instead it seems to bring drawbacks (increased mouse travel, a large distance between the menu and its associated window).

    The other worrying factor is that applications seem to be moving away from having a menu bar at all, and instead of innovating on this and defining new guidelines for moving away from the menu, we are giving it prime place right at the top of the desktop. If applications continue moving away from the menu bar, we will have an inconsistent experience concerning where to locate application-related options/tools depending on which app you are using (e.g. Chrome). Finally, the current global menu bar implementation doesn't work for all apps, and doesn't even work for all apps in the default install. This means that the default desktop implementation will be inconsistent.

    So, there are a bunch of reasons why moving to a global menu is a bad idea, and we need some pretty convincing arguments for why it is a good one. What are the reasons for the global menu implementation in the desktop version of Unity?


  • EclipseLink does multitenancy. Today.

    - by alexismp
    So you heard Java EE 7 will be about the cloud, but that didn't mean a whole lot to you. Then it was characterized as PaaS, something in between IaaS and SaaS. And finally it all became clear when referenced as support for multitenancy. Or did it? When it comes to JPA, and persistence in general, multitenancy is defined as the ability to share a database schema among various groups of users (i.e. tenants). This means that there is no database setup or reconfiguration required, as the data is co-located in the same database. EclipseLink 2.3 (the Indigo train release) lets you do just that by supporting tenant discriminator column(s) via annotations or XML, with applications providing values for these discriminators via an API or persistence unit configuration. Check out the details here. EclipseLink 2.3 is scheduled to be the default and supported JPA provider for GlassFish 3.1.1. Another nice feature of this release is the ability to extend persistence units on the fly. The GlassFish Podcast has an interview up with EclipseLink's Doug Clarke. Expect more on multitenancy across the Java EE spectrum as the specification work progresses.


  • The Oracle Platform

    - by Naresh Persaud
    Today’s enterprises typically create identity management infrastructures using ad-hoc, multiple point solutions. Relying on point solutions introduces complexity and a high cost of ownership, leading many organizations to rethink this approach. In a recent worldwide study of 160 companies conducted by Aberdeen Research, there was a discernible shift in this trend, as businesses are now looking to move away from the point-solution approach with multiple vendors and adopt an integrated platform approach. By deploying a comprehensive identity and access management strategy using a single platform, companies are saving as much as 48% in IT costs while reducing audit deficiencies by nearly 35%. According to Aberdeen's research, choosing an integrated suite or “platform” of solutions for identity management from a single vendor can have many advantages over choosing “point solutions” from multiple vendors. The Oracle Identity Management Platform is uniquely designed to offer several compelling benefits to our customers:

    Shared services: Instead of separate solutions for administration, authentication, authorization, audit and so on, Oracle Identity Management offers a set of shared services that can be consumed by each component in the stack and by developers of new applications.

    Actionable intelligence: The most compelling benefit of the Oracle platform is “actionable intelligence”, which means that if there is a compliance violation, the same platform can fix it, and if a user is logging in from an untrusted device, or we detect an attack, the platform can act proactively on that information.

    Suite interoperability: With the Oracle platform, the components all connect and integrate with each other. So if an organization purchases the platform for provisioning and later wants to manage access, the same platform can offer access management, which leads to cost savings.

    Extensible and configurable: With point solutions you typically get limited ability to extend the tool to address custom requirements, but with the Oracle platform all of the components share a common way to extend the UI and behavior.

    Find out more about the Oracle Platform approach in this presentation.


  • Not allowed to upload .HTML files to my own DNN Site: Is it normal?

    - by Jake M
    My question: Our webhost provider won't make it possible for us to upload .html files to our DNN site through the DNN File Manager page. Is that normal, and should I push them to allow me to do this?

    We have recently transferred our website to a Dot Net Nuke site. We originally had our website on a Linux server with Python scripts handling the backend; obviously we now have a Windows server running .NET with ASP.NET code on the backend. Our webhost is a local Australian company, and they are saying we can't upload any .html files to the main part of the server, i.e. www.ourdomain.com/Portals/0/. They are saying that the only place I can upload .html files is via FTP to this folder: www.ourdomain.com/Portals/0/html_content. This is a major problem for me because I am trying to upload my own skin, which means I need to upload a main.html file to www.ourdomain.com/Portals/0/skins/myskin/, but they won't let me! I guess what I am asking is: is this normal practice, and why would they not allow this? As an experienced web admin for Linux servers, and as someone who is used to being able to do whatever I want on my OWN server, this is something that really pis$%s me off!


  • Simple dependency tree diagram generator

    - by foampile
    I have a need to produce a simple dependency tree diagram. The input data would be in the following simple format:

        ITEM_NAME   DEPENDENCY
        ----------------------
        ITEM_101    ITEM_75
        ITEM_102    ITEM_77
        ITEM_102    ITEM_61
        ITEM_102    ITEM_11

    This means that ITEM_101 depends on ITEM_75, and ITEM_102 depends on ITEM_77, ITEM_61 and ITEM_11. So the diagram would have ITEM_77, ITEM_61 and ITEM_11 on one vertical level, with ITEM_102 below them and a line connecting each of the three dependencies to ITEM_102. The same goes for ITEM_101: ITEM_75 would be somewhere above it, with a line connecting them. In the real world this tree represents a hierarchy of scheduling jobs. We have a very extensive workload automation hierarchy in Autosys, and I have heard that its front-end utility has something like this visual tree representation; however, for some reason, that utility has been disabled by our admins. My business users want to see this hierarchy in an easy-to-consume format. I was hoping that I won't have to program something like this from scratch, because it seems like quite a common reporting requirement and the input data is simply formatted. My question is: is there a FOSS tool that takes standardized data input and produces such a hierarchical tree? Thanks
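
    For what it's worth, one FOSS option that fits this shape is Graphviz. A minimal sketch (assuming the two-column input above is saved as deps.txt; the file names are illustrative) that turns each row into an edge and renders the tree:

        # Convert the two-column dependency list into a Graphviz DOT file;
        # an edge DEPENDENCY -> ITEM places each dependency above its item
        { echo 'digraph deps {'
          awk 'NR > 2 { printf "  \"%s\" -> \"%s\";\n", $2, $1 }' deps.txt
          echo '}'
        } > deps.dot

        # Render to PNG (requires the graphviz package)
        dot -Tpng deps.dot -o deps.png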


  • Google suddenly only indexes https and not http

    - by spender
    So all of a sudden, searches for our site "radiotuna" return the result as an HTTPS link: https://www.google.com/?q=radiotuna#hl=en&safe=off&output=search&sclient=psy-ab&q=radiotuna&oq=radiotuna&gs_l=hp.12...0.0.0.3499.0.0.0.0.0.0.0.0..0.0.les%3B..0.0...1c.LnOvBvgDOBk&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&fp=177c7ff705652ec3&biw=1366&bih=602

    We only use HTTPS for the download of two specific files (these URLs are resources used for the auto-update functionality of an app we distribute). All other parts of the site should be served over HTTP. We wouldn't like to see any other traffic over HTTPS, nor any of our site links appearing in search engines as HTTPS, so I'd like to address this issue. It seems that the following solutions are available:

    1. Hand out an HTTPS-specific robots.txt, as such:

        User-agent: *
        Disallow: /

    2. And/or, at the app level, 301 permanent redirect all requests (except the two above) to HTTP if they come in as HTTPS.

    My concern with the robots method is that if, for some reason, Google decided not to index the HTTP pages, disallowing the HTTPS pages might mean that Google has nothing left to index, with disastrous consequences for our ranking. This means I'm inclined to go with a 301 redirect. Any thoughts?
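
    For reference, both options can be expressed in a few Apache mod_rewrite rules. A sketch only (the /downloads/ prefix stands in for the two real file paths, and robots-https.txt is an assumed file name):

        RewriteEngine On

        # Serve the disallow-all robots file only over HTTPS
        RewriteCond %{HTTPS} on
        RewriteRule ^robots\.txt$ /robots-https.txt [L]

        # 301 every other HTTPS request back to HTTP, except the download files
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/downloads/
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]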


  • How does a CS student negotiate in/after a job interview?

    - by Billy ONeal
    Alright, I've gotten to the second step in the interview process. At this point I'm working under the assumption that I might be offered a position: flying my butt to Redmond would be quite an expense if they weren't at least considering me for something (*crosses fingers*). So, if one is offered a position, how should a CS student negotiate? I've heard a few strategies for dealing with software companies when you are being considered for a hire, but most of them assume the developer is in a powerful position: (s)he has lots of job experience, may even be overqualified for what the employer is looking for, and is part of a small market of qualified developers, because 99% of the applications companies receive are from people who are woefully underqualified. I'm in a completely different position. I think I compare favorably to most of my fellow students, and I have been a programmer for almost 10 years, but I often still feel green compared to most of my coworkers. I'm in a position where the employer holds most of the chips; they'd be doing me quite a favor by hiring me. I think this scenario is considerably different from the one most of the advice I've seen is aimed at. Above all, I don't want to be such a prick in negotiating that it damages my chances to actually land the position, even if that means not negotiating at all. How should one approach a scenario like this? P.S. If this is off topic feel free to close it -- I think it's borderline, and I'm of the opinion that it's better to ask and be closed than not to ask at all ;)


  • Some info about SD card partitions after using the dd statement, and some other doubts

    - by AndreaNobili
    I am not very experienced with Linux, and I have the following situation that causes me some doubts. I wrote Raspbian (the Raspberry Pi Linux distribution) to an SD card using Ubuntu's dd command:

        sudo dd if=2014-01-07-wheezy-raspbian.img of=/dev/sdb bs=1024

    If I now run fdisk -l, I see that I have two partitions related to my SD card:

        Device Boot      Start        End    Blocks  Id  System
        /dev/sdb1         8192     122879     57344   c  W95 FAT32 (LBA)
        /dev/sdb2       122880    5785599   2831360  83  Linux

    And now the first doubt. The dd command created two partitions on the SD card:

    1. /dev/sdb1, a little FAT32 partition (what does "(LBA)" mean?)
    2. /dev/sdb2, a larger Linux partition

    OK... the doubt is: why did it also create a FAT32 partition and not only a Linux partition? Also, if I browse my devices in the file manager, I can see a device (related to my SD card) in the devices list that contains some Raspbian files, and looking at its properties (screenshots omitted) it seems to me that this is the small FAT32 partition. So now I have the following doubts:

    1. If it is the small FAT32 partition, what does it contain? The Raspbian boot files, or what?
    2. Why do I see only the FAT32 partition in the devices list and not the Linux one (/dev/sdb2)? To see it, do I have to mount it? How?

    Thanks
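
    For the last doubt, a minimal sketch of mounting the Linux partition by hand (the mount point is arbitrary):

        # Create a mount point and mount the SD card's second (Linux) partition
        sudo mkdir -p /mnt/raspbian-root
        sudo mount /dev/sdb2 /mnt/raspbian-root

        # Browse it, then unmount before removing the card
        ls /mnt/raspbian-root
        sudo umount /mnt/raspbian-root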


  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
    I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments I have with other people on the project is about how to call the Twitter API. Before, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets was the heaviest part, since we don't store tweets in our database, so viewing tweets means we have to call the API every time. One of the people on the project suggested that we fetch tweets through JavaScript instead, to lessen the load on the server. We used GET search, which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now. When the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our JavaScript requests. As said here:

        We don't support or recommend performing OAuth directly through Javascript -- it's insecure and puts your application at risk. The only acceptable way to perform it is if you kept all keys and secrets server-side, computed the OAuth signatures and parameters server side, then issued the request client-side from the server-generated OAuth values.

    If we do exactly what Twitter suggests, the only difference between this and doing everything server-side is that our server won't have to contact the Twitter API every time the user wants to view tweets. Here's how I would picture what's happening every time the user makes a request (diagram omitted). If we do it through JavaScript, it would be harder on my part because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth the trouble. Doing it through Ruby on Rails would be very easy, since the Twitter gem does most of the grunt work already, so I'm encouraging the other people on the project to agree with me. Is the difference in performance trivial, or is it significant enough to switch to JavaScript?


  • Agile Data Book from O'Reilly Media

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/07/01/153309.aspx

    As part of my ongoing self-education, and with some free time approaching (yeah, both are a must for every IT person and geek!), I have carefully examined the latest trends in the computersphere with whatever tools I had at my disposal (nothing really fancy was used) and came to the conclusion that for a database pro the *hottest* topic today is undoubtedly #BigData and the rapidly growing and spawning ecosystem around it. Having recently immersed myself in the NoSQL world (let me say right away: NoSQL means Not Only SQL), one book really stood out from the crowd. Book site: http://shop.oreilly.com/product/0636920025054.do

    Despite being a new book, I am sure it will end up on the tables of many Big Data generalists. In a few dozen words, that is primarily for two reasons: 1) the author understands that a typical business today cannot wait too long for a Data Scientist to deliver results, demanding as usual a very quick return on investment (ROI), and 2) the book covers all the needed and proven modern brick-and-mortar offerings to get the job done by a relative newcomer to the Big Data world. It certainly enables such a professional to grow and expand based on the acquired knowledge, and one can truly do it very fast.


  • How can I install Ubuntu on my NTFS HDD without formatting?

    - by Ridvan Coban
    My HDD is a single NTFS partition (500 GB), and 430 GB is used by my photos/movies/music etc., which I never want to lose. I actually installed Ubuntu on a USB flash drive (I'm using it right now), but it is too slow that way. My problem is: my computer is damaged (maybe the chipset, I'm not sure) and none of the Windows versions (XP, Vista, 7) work on my PC; I get a blue-screen error as soon as the Windows startup logo shows. But Ubuntu just works flawlessly. That means I cannot use Wubi. I wanted to shrink my HDD partition without losing data (which can be done in Windows) but found nothing about that on the Ubuntu forums. Is this possible? Or can I install Ubuntu on my NTFS filesystem? Note: I don't have a chance to back up 400 GB of data. Sorry if my question is written a bit confusingly. I hope you get the point and someone has an idea ;)
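
    Shrinking the NTFS partition from Ubuntu is possible; GParted does it graphically, and a command-line sketch with ntfsresize (the device name and target size are illustrative, and a backup of anything irreplaceable is still strongly advised) looks like this:

        # Dry run first: check whether the filesystem can shrink to 450 GB
        sudo ntfsresize --no-action --size 450G /dev/sda1

        # If the dry run is clean, perform the actual resize
        sudo ntfsresize --size 450G /dev/sda1

    Note that ntfsresize only shrinks the filesystem, not the partition itself; the partition table still has to be adjusted to match, which is why GParted, which does both steps together, is usually the safer route.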


  • Pros and cons of learning to program on Windows, Linux and Macs

    - by Amumu
    I have been studying IT for two years and am going to graduate later this year (if everything goes well). I think it's time for me to choose a path and specialize in some field of this large industry. Personally, I want to be a game programmer, but to be one I'll surely have to invest my time in studying Windows programming, then DirectX and other game-related programming techniques. On the other hand, Linux seems promising as well. I am not sure about game programming on it, but becoming an expert in this OS seems worthwhile, and by expert I don't mean using the OS as an administrator, but going further than that: understanding the OS to its essence and being able to produce applications for it. However, there are some obstacles, in my view, on this development path. Many of my friends think that because Linux is based on free and open source software, following it means we also have to give away our software for free; otherwise, we will have to find a second job to make a living. Currently, I think a viable way to make money on Linux is doing client-server related work. Another way to develop my career is to become an expert in developing business applications for companies; this is more about business than specialized IT fields, so I am not really interested. Another alternative is programming for mobile devices, such as iPhone and Android, which seems very promising and easier to approach. Another way is to become a computer scientist and research academic subjects such as AI and human-computer interaction, but this is far beyond my reach, so I won't invest my time in it until I feel I am experienced enough. That's all I can think of for now. I may be missing a lot of things, so I need more opinions as input to get the big picture of the industry for my career path.


  • Almost time to hit the road again

    - by Chris Williams
    I’ve had a few months of not much traveling, but now that the weather is improving, conference season is starting up again. That means it’s time for me to start hitting the road. In June, I have Tech Ed 2010 in New Orleans, LA. I lived in New Orleans for several years, both as military and civilian, and I have a few friends still down there. I haven’t been there since before Hurricane Katrina, so I have mixed feelings about returning, but I am still looking forward to it. Also in June, I have Codestock in Knoxville, TN. Codestock is one of my favorite events, primarily because of the excellent people that speak there and also attend sessions. It’s a great mix of people and technologies. Sometime in July or August, I’m headed to Austin, TX for a couple of days. I don’t know the exact date yet, but if you have an event down there in that timeframe, let me know and maybe we can sort something out. In September, I’m heading to Seattle for my first PAX (Penny Arcade Expo). I’m going strictly as an attendee and it looks like a LOT of fun; really excited to check it out. Also in September, I’m headed to Omaha for the Heartland Developers Conference. This is a FANTASTIC event, and certainly one of my local favorites (I guess local is relative; it’s about a 6-hour drive). In addition to speaking on WP7, I’ll be doing a series of hands-on labs on XNA the day before the conference starts, so that should be a lot of fun as well. On top of all this, I have my own XNA User Group to take care of. In August, Andy “The Z-Man” Dunn is coming to speak and check out the various food-on-a-stick offerings at the Minnesota State Fair!


  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all the game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If I have a scene managed by an arbitrary structure (e.g. an octree, BSP tree, quadtree, kd-tree, etc.), what is the best way to pass it to the render system? The obvious problem is that if the render system were simply given the root node of the structure, it would require intimate knowledge of that structure in order to traverse it. My solution is to cull all objects outside the frustum in the scene manager, build a list of the objects that remain, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (this would be a structure required by the render system as a means of knowing which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations. I have been thinking a lot about this and have searched various terms on here and followed any additional information/links, but I have not really found a definitive answer. The case may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVHs?


  • How to auto-mount a copied encrypted home

    - by LedZ
    How can I auto-mount and use the encrypted home that I copied to another partition on the same hard disk? I'm running Ubuntu 11.10. My encrypted home is on sda1, where I have two users: userA and userB. The other partition is sda3, on which I have some other data. BTW, sda1 is formatted as ext4 and sda3 as ext3. I did the following:

    1. Logged out of the GUI (GNOME) and switched (using Ctrl+Alt+F1) to a console.
    2. Logged in there and switched to root (using sudo -s).
    3. Created a new mount point under /mnt (mkdir /mnt/tmp).
    4. Mounted /dev/sda3 on that mount point (mount /dev/sda3 /mnt/tmp).
    5. Copied my encrypted /home to /mnt/tmp using rsync (rsync --acvxASXH --progress --stats /home/ /mnt/tmp/).

    After the copy, I looked at my "new home" in /mnt/tmp and found the following three folders: userA, userB, .ecryptfs. The structure of /dev/sda3 mounted on /mnt/tmp looks like this (userB's entry under .ecryptfs not listed):

        userA
        userB
        .ecryptfs
          userA
            auto-mount
            auto-umount
            Private.mnt
            Private.sig
            wrapped-passphrase
            .wrapped-passphrase.recorded
            .Private
              (encrypted file_1)
              (encrypted file_2)
              (encrypted file_n)

    Now I would like this copy to behave exactly like the original home directory: it should be auto-mounted at boot, give me access to my unencrypted files after login, and leave all my files encrypted again after logout. Any suggestions?
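
    A sketch of the usual first step (assuming the goal is to mount /dev/sda3 as /home at boot; ecryptfs should then unwrap each user's .Private on login as before, since its metadata was copied along): point /home at the new partition in /etc/fstab by UUID:

        # Find the UUID of the new partition
        sudo blkid /dev/sda3

        # Then add a line like this to /etc/fstab, using the UUID printed above:
        # UUID=xxxx-xxxx  /home  ext3  defaults  0  2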


  • Encrypt SSD or not?

    - by JamesBradbury
    My desktop machine is running Ubuntu 12.04 (and will probably stay with it until the next LTS). I've got a new 120 GB SSD on the way to join my existing 420 GB spinning disk. If it makes any difference, I'll be dual-booting with Windows 7 across both disks too. I've read some helpful answers here about /home setup and enabling TRIM, which I intend to follow; most of my /home will be on the SSD, with only photos, videos and music on the spinning disk. The question is: when I reinstall Ubuntu from CD or USB, should I encrypt the SSD? Specifically:

    1. I'm reading that drive wear isn't much of an issue with modern SSDs, as they last decades even if you hammer them. Is this true?
    2. How big a performance reduction will encrypting cause? (I have a Sandy Bridge i7, so I guess it can cope.)
    3. Is it more important, from a security point of view, to encrypt an SSD? I think I read somewhere that it may be hard to reliably wipe data from one.

    By all means answer even if you only know about one of those things.
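
    On the TRIM point, a quick sketch for checking support and trimming manually (the device name is illustrative; on 12.04, periodic TRIM is not enabled out of the box):

        # Check whether the SSD advertises TRIM support
        sudo hdparm -I /dev/sda | grep -i trim

        # Trim free space on a mounted filesystem by hand
        sudo fstrim -v /

    Note that if the SSD is encrypted with dm-crypt, TRIM commands only pass through when discards are explicitly enabled in the encryption layer, which is itself a small security trade-off.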


  • Purpose-oriented user accounts on a single desktop?

    - by dd_dent
    Starting point: I currently do development for Dynamics AX, Android, and occasionally dabble with WordPress and Python. Soon, I'll start a project involving setting up WordPress on Google App Engine. Everything is, and should continue to, run from the same PC (running Linux Mint).

    Issue: I'm afraid of botching/bogging down my setup by tinkering with and installing multiple runtimes/IDEs/SDKs/services, so I was thinking of using multiple user accounts, each dedicated to the task at hand (web, Android, etc.), keeping each user as isolated as possible from the others. What I need to know is the following: Is this a good/feasible practice? The next closest thing to it is using remote desktop connections, either to other computers or to VMs, which I'd rather avoid. What about switching users? Can it be made seamless? Anything else I should know?

    Update and clarification regarding VMs and whatnot: The reason I wish to avoid resorting to VMs is that I dislike the performance impact and sluggishness associated with them. I also suspect they might add a layer of complexity I wish to avoid. This answer by Wyatt is interesting, but I think it's only partly suited to my requirements (web development, for example). Also, in reference to the point made about system-wide installs, there is a level of compromise I should accept, as expressed by this, for example. This option suggested by 9000 is also enticing (more than VMs, actually), and by no means do I intend to "juggle" JVMs and whatnot, partly for the reason mentioned before. Regarding complexity, I agree and will consider what was said; only, from my experience, I tend to pollute my work environment with SDKs and runtimes I tried and discarded, which occasionally leave leftovers that cause issues throughout the session. What I really want is a set of well-defined, non-virtualized sessions from which I can choose at my leisure, each reasonably safe from affecting the others. And what I'm really asking is whether and how this can be done using user accounts.


  • UPDATE FOR BI PUBLISHER ENTERPRISE 10.1.3.4.2 NOVEMBER 2011

    - by Tim Dexter
    It's Friday, and that means it's patch release time. Why do we do this to ourselves? 'We'll release on Friday!' It might be 11:59 on Friday, but by golly we'll have released on Friday. I can remember a release of BIP years ago where for some reason we went for 12/31 as the release date... were we mad? I seem to remember we made it, but talk about ridiculous pressure! The latest 10g rollup is out in the wild and available from Oracle support. It's a bug-fixing rollup, but worth getting to; know that support will want you to apply it and re-test before going forward on an SR. One simple but very useful fix, or rather enhancement:

        [Cause of the bug]
        Customer reports that despite the clock being shown, end users are clicking on the View button repeatedly as the initial generation is taking some time. If the button were to be grayed out then this would prevent the users requesting the report more than once. Repeated requests are causing a system overload and as this is their Production instance this is extremely important to the customer.

        [The Fix]
        Added the logic to disable the button after the user clicks on the "view" button and re-enable it when the report is loaded.

    I told a group of customers once that they have a headache and we have a non-steroidal anti-inflammatory drug; alright, I actually said 'aspirin'. This little gem of a fix relieves another little headache that our aspirin was causing. The patch number for all this BIP pain killing is 13399232. Enjoy!


  • To upgrade/install Wine 1.5.5 on Ubuntu 12.04 or any distro

    - by user67550
    Wine 1.5.5 news and installation on Ubuntu: now available in a PPA is Wine version 1.5.5, an application that lets Windows programs run on any GNU/Linux distribution. Wine (a recursive acronym for Wine Is Not an Emulator) is a reimplementation of the Win16 and Win32 application programming interfaces for Unix-based operating systems. It allows execution of programs designed for MS-DOS and Microsoft Windows versions 3.11, 95, 98, Me, NT, 2000, XP, Vista and 7. The name was originally an acronym for WINdows Emulator; this meaning was later changed to the current recursive acronym. These are some of the highlights:

    - Support for installing Mono as a Wine add-on package.
    - Dithered pattern brushes in the DIB engine.
    - Support for installing the .NET 4.0 runtime.
    - DDS files supported in d3dx9.
    - Several bug fixes.

    To install on Ubuntu, just open a terminal and type:

        sudo add-apt-repository ppa:ubuntu-wine/ppa
        sudo apt-get update
        sudo apt-get install wine1.5

    Source: ubuntutips. If you enjoyed this post, share it with your friends. Thanks!


  • Adjust sprite bounds to the visible part of the texture

    - by Crazy D0G
    Is there any way to adjust the boundaries of the visible part of a sprite? To make it easier to understand: I have a texture, such as the one shown in figure 1. Then I break it into pieces and fill the resulting fragments using PRKit (the wood texture in figures 2 and 3). But the resulting fragments have transparent areas (the green color in figures 2 and 3), and when creating a sprite from a fragment, it has the size of the initial texture. Is there a way to get rid of this transparency and adjust the size of the visible part (the wood texture) by OpenGL or cocos2d-x means? Maybe this helps: the draw() method from PRKit:

        void PRFilledPolygon::draw() {
            //CCNode::draw();
            glDisableClientState(GL_COLOR_ARRAY);
            // we have a pointer to vertex points so enable client state
            glBindTexture(GL_TEXTURE_2D, texture->getName());
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ONE_MINUS_SRC_ALPHA);
            glVertexPointer(2, GL_FLOAT, 0, areaTrianglePoints);
            glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
            glDrawArrays(GL_TRIANGLES, 0, areaTrianglePointCount);
            // restore the default texture environment mode
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            // re-enable the color array client state disabled above
            glEnableClientState(GL_COLOR_ARRAY);
        }

