Search Results

Search found 22481 results on 900 pages for 'andy may'.

Page 129/900 | < Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >

  • Can I use the snipping tool to take a screenshot of the Windows 8 Start screen or modern apps?

    - by Journeyman Geek
    One of the tools I've found invaluable in answering questions on SU is the snipping tool. I may on occasion need to take screenshots of part of the Start screen or 'modern' apps. I may not want to take a complete screenshot, and while I can use PrtSc and switch back to the desktop to paste it, this is clunky if I need to document a multi-step process. Can I use the snipping tool on modern apps or the Start screen? If not, is there a configurable way to save a series of screenshots to a fixed folder, say when I press a combination of keys, so I can work, screenshot, then crop and annotate the folder of images?
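    One hedged way to get the "hotkey saves to a fixed folder" behaviour is a small Python script. This is only a sketch, assuming the third-party Pillow and keyboard packages are installed; the folder path and key combination are placeholders, and ImageGrab simply captures whatever is currently on screen:

        # Minimal sketch: save a full-screen capture to a fixed folder on a hotkey.
        # Assumes the third-party Pillow and keyboard packages (pip install pillow keyboard);
        # the folder path and hotkey below are placeholders.
        import datetime
        import pathlib

        import keyboard              # global hotkey hook
        from PIL import ImageGrab    # full-screen capture on Windows

        OUT_DIR = pathlib.Path(r"C:\Screenshots")
        OUT_DIR.mkdir(parents=True, exist_ok=True)

        def snap():
            # Timestamped file name so a multi-step process yields an ordered series.
            name = datetime.datetime.now().strftime("shot_%Y%m%d_%H%M%S.png")
            ImageGrab.grab().save(OUT_DIR / name)

        keyboard.add_hotkey("ctrl+alt+s", snap)   # press Ctrl+Alt+S to capture
        keyboard.wait()                           # keep the script running

    Run it in the background before starting the multi-step process; each press of the hotkey drops a timestamped PNG into the folder, ready to crop and annotate later.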

    Read the article

  • Simple Introduction to using the Enterprise Manager SOA/BPM Facade API by Jaideep Ganguli

    - by JuergenKress
    There may be times when you need to expose just a small section of what is displayed in the Enterprise Manager console for SOA/BPM (EM console). A simple example is where stakeholders on the systems integration or customer teams want to monitor a dashboard of statistics on how many instances of a composite have been created and how many have faulted; you can see this in the EM console. Some of these stakeholders may not be familiar with the EM console and just want a quick view of the statistics, without having to navigate EM. This post describes how to use the Oracle Fusion Middleware Infrastructure Management Java API for Oracle SOA Suite (also called the Facade API) to build a custom ADF page to display this information. If you want a quick introduction to using the Facade API, this post is for you. Read the complete article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Ternary and Artificial Intelligence

    - by user2957844
    I'm not much of a programmer myself; however, I have been thinking about the future of AI. If a fully functional AI is programmed in a binary environment, as is used in current computing, would that create a bit of a black-and-white personality? As in just yes/no, on/off, 1/0? I will use the Skynet computer from the Terminator series as a bad analogy: it is brought online and comes to the conclusion that humanity should just be destroyed so the problem is resolved; basically its only options were to fire the missiles or not. (The films do not really go into what its moves would be after doing such a thing, but that goes into the realms of AI evolution so does not really fit with this question.) It may also have been badly programmed. Now, the human mind has been likened to a ternary system, which allows our "out of the box" thinking along with all the other wonderful things our minds can do. So, would it not be more prudent to create a functional ternary system and program an AI using it, so the resulting personality would be able to benefit from the third "maybe" (so to speak) option? I understand that in binary there are ways to get around the whole yes/no way of things; however, the basic operations are still just 1s and 0s. Again using the above bad Skynet analogy: if it could have had that third "maybe" option as part of its core system, it may have decided not to launch, due to being able to make sense of the intricacies of human nature and the politics of such a move. In effect, my question is: would an AI benefit more from ternary computing as opposed to binary, due to the inclusion of -1, or 2, dependent on the system ("maybe," as I call it)?
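    For what that "third option" looks like concretely, here is a minimal Python sketch (purely illustrative, not an AI design) that converts an integer to balanced ternary, the representation in which every digit is -1, 0 or +1:

        # Minimal sketch: convert an integer to balanced ternary (digits -1, 0, +1).
        # Purely illustrative of the extra state a trit offers over a bit.
        def to_balanced_ternary(n: int) -> str:
            if n == 0:
                return "0"
            digits = []
            while n != 0:
                r = n % 3
                n //= 3
                if r == 2:          # digit 2 becomes -1 with a carry into the next trit
                    r = -1
                    n += 1
                digits.append({-1: "-", 0: "0", 1: "+"}[r])
            return "".join(reversed(digits))

        print(to_balanced_ternary(8))   # prints "+0-", i.e. 9 - 1 = 8

    The negative digit is exactly the state a single binary bit cannot express directly.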

    Read the article

  • Remove Windows 7's limitation on number of concurrent TCP connections (HTTP web requests)

    - by Ghita
    I have an application that tries to open as many HTTP requests as possible (in order to stress test a proxy implementation). It seems to me that Win7 (SP1) may have a limitation on the number of concurrently opened connections (it may be the so-called half-open connection limit, if I'm not wrong). Is there something I can do on the client? I also test using a Vista PC that acts as the proxy server. It would be great if I could configure it to sustain at least 50 new connections initiated per second on the client side and many more on the server. I made the modification according to this TechNet article by setting TcpNumConnections = 150, but it doesn't make a difference. I still only see about 20 TCP sockets associated with my HTTP client when using TcpView.
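    A hedged way to see whether a limit is being hit at all: a short Python sketch that tries to hold open a fixed number of simultaneous TCP connections and reports how many actually succeeded. The proxy address and the target count are placeholders:

        # Minimal sketch: open many TCP connections concurrently and count how many
        # actually succeed, to observe any per-machine connection limit.
        import socket
        import threading

        PROXY = ("192.168.1.10", 8080)   # placeholder address of the proxy under test
        TARGET = 150                     # how many simultaneous connections to attempt
        opened = []
        lock = threading.Lock()

        def connect():
            try:
                s = socket.create_connection(PROXY, timeout=5)
                with lock:
                    opened.append(s)     # keep the socket referenced so it stays open
            except OSError:
                pass

        threads = [threading.Thread(target=connect) for _ in range(TARGET)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print("connections held open:", len(opened))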

    Read the article

  • GRUB menu not waiting despite GRUB_TIMEOUT=10

    - by Optimus
    I have Ubuntu 12.04 installed alongside Windows 7. The GRUB menu doesn't seem to obey GRUB_TIMEOUT=10: I see the menu for a split second and it immediately defaults to the first option. The menu worked fine when I first installed Ubuntu, and I am not able to pinpoint what exactly broke it (maybe some update?). I did resize my Ubuntu partition using GParted, but am not sure if that is what caused it. Here are my settings from /etc/default/grub:

        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""

    How do I fix this?

    Edit: As suggested by 'kamil', this is what I have tried so far, with no luck:

    1) Hold the Shift key while booting.
    2) sudo gedit /etc/default/grub, set GRUB_TIMEOUT=10, then sudo update-grub.
    3) sudo gedit /etc/default/grub, set GRUB_TIMEOUT=10, then sudo update-grub2.
    4) At the end of the /etc/grub.d/00_header file, comment out the if condition except for the regular set timeout line, like this:

        #if [ \${recordfail} = 1 ]; then
        #  set timeout=-1
        #else
        set timeout=${GRUB_TIMEOUT}
        #fi

       then sudo update-grub and sudo update-grub2.
    5) Install Boot-Repair:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install -y boot-repair
        boot-repair

    Boot-Repair output: "Boot successfully repaired. ... The boot files of [The OS now in use - Ubuntu 12.04.1 LTS] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, 200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]." (https://help.ubuntu.com/community/BootPartition)

    http://paste.ubuntu.com/1220468/ - here is the full boot-repair data.

    Could the GRUB files not being at the start of the disk create such issues?

    Read the article

  • CentOS 5.5 Package documentation

    - by fthinker
    Usually when I install a common package like PostgreSQL, MySQL, or Python using Yum, it installs the files held within those packages into locations specific to CentOS itself. It may also install scripts specific to CentOS only. These paths may not be the same as the defaults found in the source distributions on the PostgreSQL, MySQL, or Python project websites, and the scripts are usually unique to CentOS. Recently, when I installed PostgreSQL under Ubuntu, I found some very nice distribution-specific information about how the install was organized and how to use the package in an Ubuntu way. I found this information in /usr/share/doc/. Is there any such information included within CentOS?

    Read the article

  • Is it possible to cause artificial network packet loss or latency?

    - by nbolton
    I'm trying to reproduce some issues in a deployed application where the MSSQL server and client are running on two separate machines. I think there may be network issues between the two machines, so I'd like to try to reproduce these conditions on two Hyper-V virtual machines (on the same virtual server). Of course, the network for these virtual machines is "local", so it's actually far from the conditions in a live environment. Is there a program I can run on either virtual machine that will degrade the network performance? Or are there any other workarounds? For example, one way to reproduce the conditions might be to run the VMs on separate Hyper-V servers in geographically dispersed locations (so the SQL traffic goes over VPN or something) -- but this is a little long-winded, I think. There must be a simpler way.
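    One possible workaround, sketched here in Python under the assumption that the SQL client can be pointed at an intermediate host/port: a tiny TCP relay that forwards traffic to the real server while injecting artificial delay and occasional connection drops. The addresses, delay and drop rate below are placeholders:

        # Minimal sketch of a TCP relay that injects artificial latency and connection
        # drops between a client and a server. Point the client at LISTEN instead of
        # the real server. Addresses, delay and drop rate are placeholders.
        import random
        import socket
        import threading
        import time

        LISTEN = ("0.0.0.0", 14330)        # client connects here
        UPSTREAM = ("10.0.0.5", 1433)      # real SQL Server (placeholder)
        DELAY_S = 0.2                      # added one-way latency per chunk
        DROP_RATE = 0.01                   # probability of killing a connection mid-stream

        def pump(src, dst):
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    time.sleep(DELAY_S)                 # artificial latency
                    if random.random() < DROP_RATE:     # simulated loss
                        break
                    dst.sendall(data)
            finally:
                src.close()
                dst.close()

        server = socket.socket()
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN)
        server.listen(5)
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection(UPSTREAM)
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    This degrades the stream at the application layer, so it approximates latency and loss without touching the virtual switch; tune DELAY_S and DROP_RATE until the live symptoms reproduce.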

    Read the article

  • How to install libcrypt-ssleay-perl in Ubuntu?

    - by Deqing
    When I tried to install libcrypt-ssleay-perl, it said:

        $ sudo apt-get install libcrypt-ssleay-perl
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         libcrypt-ssleay-perl : Depends: perlapi-5.12.4 but it is not installable
        E: Unable to correct problems, you have held broken packages.

    This perlapi-5.12.4 is actually a virtual package provided by perl-base, which I had already installed:

        $ dpkg -l | grep perl-base
        ii  perl-base  5.14.2-6ubuntu2.1  minimal Perl system

    So what should I do to install libcrypt-ssleay-perl now?

    Read the article

  • Can we use Google Earth images as textures on our Unity3D mesh?

    - by Jake M
    We are developing a commercial app for iOS and Android. The app will display development plans (architectural drawings) in a real-world 3D environment. The app works by creating a Unity3D mesh, applying a Google Earth image as the texture, then drawing 3D lines (architectural drawings) over the Unity mesh. Question: we are unsure whether this is allowed under Google's terms and agreements - see the quoted text below. It is a little vague whether what we are doing (explained above) violates their terms. What do you think? Does anyone know how we can contact Google to ask them?

        "You may not mass download or use bulk feeds of any Content, including but not limited to extracting numerical latitude or longitude coordinates, geocoding, text-based directions, imagery, visible map data, or Places data (including business listings) for use in other applications. You also may not trace Google Maps or Earth as the basis for tracing your own maps or geographic materials. For full details, please read section 10.3.1 of the Maps/Earth API Terms of Service."

    Does anyone have any advice/experience dealing with this stuff?

    Read the article

  • Amazon and bandwidth limits

    - by Dave
    This question may sound weird to some of you, but I have never really used the cloud and, above all, I'm still a beginner in web development, so I would be really thankful if someone could answer a few of my questions. I would like to deploy a simple website to Amazon; however, I'm concerned about bandwidth, as they charge $0.12 per GB and I'm not able to set a budget limit. My problem is that I wouldn't like to pay for 1000GB of bandwidth if someone for some reason decides to download one file constantly. So could those of you who have experience with Amazon tell me what happens if my app is able to handle, say, 50 req/sec at 30 KB per page? Does that mean that in the worst case I would have to pay for 50 req/sec * 60 * 60 * 24 * 30 (a month of seconds) * 30 KB per page = 3888 GB?
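    As a sanity check of that arithmetic, here is a small Python sketch of the worst case; the $0.12/GB price and the constant 50 req/sec load are assumptions taken from the question, not Amazon's actual billing:

        # Worked example of the worst-case monthly bandwidth and cost, assuming the
        # site really serves 50 requests/second of 30 KB each, non-stop for 30 days,
        # and a flat $0.12 per GB transfer price (both are assumptions, not quotes).
        REQ_PER_SEC = 50
        PAGE_KB = 30
        SECONDS_PER_MONTH = 60 * 60 * 24 * 30          # 2,592,000 s
        PRICE_PER_GB = 0.12

        total_kb = REQ_PER_SEC * SECONDS_PER_MONTH * PAGE_KB
        total_gb = total_kb / 1_000_000                # using 1 GB = 1,000,000 KB
        print(f"bandwidth: {total_gb:,.0f} GB")                       # -> 3,888 GB
        print(f"cost at $0.12/GB: ${total_gb * PRICE_PER_GB:,.2f}")   # -> $466.56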

    Read the article

  • What is the reason for: "Depends: libatk1.0-0 (>= 1.29.3) but 1.28.0-0ubuntu1 is to be installed"

    - by solomongaby
    I am trying to install FileZilla from this repository: https://launchpad.net/~yofel/+archive/ppa After sudo apt-get update, I tried to install it, but I get this error:

        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         filezilla: Depends: libatk1.0-0 (>= 1.29.3) but 1.28.0-0ubuntu1 is to be installed

    Do you have any idea what is happening?

    Read the article

  • State Changes in a Component Based Architecture [closed]

    - by Maxem
    I'm currently working on a game using the naive component-based architecture (entities are a bag of components; entity.Update() calls Update on each updateable component). While the addition of new features is really simple, it makes a few things really difficult: a) multithreading / concurrency, b) networking, c) unit testing. Multithreading / concurrency is difficult because I basically have to do poor man's concurrency (running the entity updates in separate threads while locking only the stuff that crashes, like lists, and ignoring the staleness of read state - some states are already updated, others aren't). Networking: there are no explicit state changes that I could efficiently push over the net. Unit testing: all updates may or may not conflict, so automated testing is at least awkward. I was thinking about these issues a bit and would like your input on these changes / ideas:

    1. Switch from the naive CBA to a CBA with sub-systems that work on lists of components
    2. Make all state changes explicit
    3. Combine 1 and 2 :p

    Example world update:

        statePostProcessing.Wait()   // ensure that post processing has finished
        Apply(postProcessedState)
        state = new StateBag()
        Concurrently(
            () => LifeCycleSubSystem.Update(state),  // populates the state bag
            () => MovementSubSystem.Update(state),   // populates the state bag
            ....
        )
        statePostProcessing = Future(() => PostProcess(state))
        statePostProcessing.Start()  // Tick is finished, the post processing happens in the background

    So basically the changes are (consistently) based on the data for the last tick; the post processing can a) generate network packages and b) fix conflicts / remove useless changes (example: entity has been destroyed - ignore movement etc.). EDIT: To clarify the granularity of the state changes: if I save these post-processed state bags and apply them to an empty world, I see exactly what has happened in the game these state bags originated from - "free" replay capability. EDIT2: I guess I should have used the term Event instead of State Change, and point out that I kind of want to use the Event Sourcing pattern.

    Read the article

  • Trouble installing gnome-shell-extensions-user-theme, dependency/PPA conflict?

    - by Drex
    I installed gnome tweak tool, and am trying to set up custom themes and whatnot. So, trying to install gnome-shell-extensions-user-theme:

        me@computer:~$ sudo apt-get install gnome-shell-extensions-user-theme
        [sudo] password for me:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         gnome-shell-extensions-user-theme : Depends: gnome-shell-extensions-common but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Not going to be installed? Okay, let's see about that...

        me@computer:~$ sudo apt-get install gnome-shell-extensions-common
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-shell-extensions-common is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Wait, what? Broken packages? Ruh Roh! Seems to me it might be a PPA contradiction problem or something, but I'm tired of trashing my installs. Kinda lost here. Any ideas?

    Output of sudo apt-get install -f:

        drex@U110:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Read the article

  • In a specification, should I describe what a product does (ideally) or what it should/must do?

    - by Arlaud Pierre
    I'm writing a German specification (I'm not German). Differences may appear in this process across cultures, especially in the terminology, but usually here's the idea:

    1. The client writes his needs and wishes in a document, called a scope statement or requirements document.
    2. The supplier tries to understand the actual need of the client (which might be different from what was written, from what the client meant to say, and from what the client thinks he needs, etc.).
    3. The supplier writes a specification for the product, which should fill the client's need. The specification needs to be precise enough for the product to be made (ambiguity problems occur).
    4. The client and the supplier can check whether they have understood each other, and discuss details of the product.
    5. The client agrees with the specification (or at least its current iteration) and the supplier is ready to start the work.

    (You may of course be expected to disagree with this process, but that is irrelevant to my problem.) I'm now somewhere around the last two steps, and I've been criticized because I wrote what the product must do, not what it will do ideally. Usually along the lines of "The product must be able to perform task A", when I was expected to write "The product performs task A". This is a simple word play, but I feel that saying what the product does, while the product isn't even on the way to being made yet, is wrong. I would tend to consider a specification a contract of what the product is expected to do (what it must do and how it should do it), not a description of what it does. Said differently, I feel this is the specification and not the manual of the end product. Should I say what the product must do or what it does?

    Read the article

  • Guidelines for creating referentially transparent callables

    - by max
    In some cases, I want to use referentially transparent callables while coding in Python. My goals are to help with handling concurrency, memoization, unit testing, and verification of code correctness. I want to write down clear rules for myself and other developers to follow that would ensure referential transparency. I don't mind that Python won't enforce any rules - we trust ourselves to follow them. Note that we never modify functions or methods in place (i.e., by hacking into the bytecode). Would the following make sense?

    A callable object c of class C will be referentially transparent if:

    1. Whenever the returned value of c(...) depends on any instance attributes, global variables, or disk files, such attributes, variables, and files must not change for the duration of the program execution; the only exception is that instance attributes may be changed during instance initialization.
    2. When c(...) is executed, no modifications to the program state occur that may affect the behavior of any object accessed through its "public interface" (as defined by us).

    If we don't put any restrictions on what "public interface" includes, then rule #2 becomes: when c(...) is executed, no objects are modified that are visible outside the scope of c.__call__.

    Note: I unsuccessfully tried to ask this question on SO, but I'm hoping it's more appropriate to this site.
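    A minimal Python sketch of the two rules, with illustrative class names: the first callable is referentially transparent under the proposed definition, the second is not, because each call mutates state visible outside __call__:

        # Minimal sketch contrasting a callable that follows the proposed rules with
        # one that breaks them. Class and attribute names are illustrative only.
        class ScaledAdder:
            """Referentially transparent: state is fixed at initialization and never
            mutated afterwards, and __call__ touches nothing outside its own scope."""
            def __init__(self, factor):
                self._factor = factor            # set once, never changed later (rule 1)

            def __call__(self, x, y):
                return self._factor * (x + y)    # depends only on arguments and frozen state

        class CountingAdder:
            """Not referentially transparent: each call mutates visible state (rule 2),
            so repeated calls with the same arguments are observably different."""
            def __init__(self):
                self.calls = 0

            def __call__(self, x, y):
                self.calls += 1                  # modifies an attribute visible outside __call__
                return x + y

        add3 = ScaledAdder(3)
        assert add3(1, 2) == add3(1, 2) == 9     # same inputs, same result, no side effects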

    Read the article

  • ASP Fails with 500 Error

    - by VinceM
    We have a server set up as an IIS box with some static pages and a few ASP pages that handle the form submissions. The ASP is really VBScript that sends a CDO message. After moving these pages to the new server the form will not submit; it gives a 500 error and the following shows in Event Viewer:

        Error: The Template Persistent Cache initialization failed for Application Pool 'DefaultAppPool' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

    I can't seem to find any info on this anywhere... I was thinking it may have something to do with the fact that we created this server from an image of another server. Thanks for your help in advance... Vince

    Read the article

  • Solutions for iOS collaborative sync (iCloud CoreData, CouchDB)?

    - by mluisbrown
    I'm developing an iOS app where one of the features will be allowing users to share and collaborate on data (e.g. lists). From everything I've read, and based on the way that iCloud Core Data sync works, I assume it would not be a good fit for the following reasons, but I wanted to make sure I wasn't missing anything, as I'd prefer not to use a 3rd-party syncing solution if at all possible:

    1. iCloud sync of any kind (Core Data, Document or Key/Value pairs) can only ever be between devices that use the same iCloud account, so it's designed for a single user syncing data over multiple devices. Any kind of collaborative sync (several people editing the same document/list simultaneously) would be limited to everyone having the same iCloud account, and cases of people sharing the same iCloud account are usually limited to, for example, husband and wife or similar close relationships with a small number of people.
    2. iCloud Core Data sync is for ensuring that each synced device has the same data. It doesn't seem to allow syncing just a subset of the data, so scenarios in which each user has their own documents and is only sharing/collaborating on a subset of them are not supported.
    3. And I'm not even mentioning the well-documented problems with iCloud Core Data syncing, which may or may not have been resolved with iOS 7.

    Given the above, it would seem that CouchDB (with TouchDB) would be a better option, as it seems to support everything I need. What other options are there that people can recommend?

    Read the article

  • Most efficient implementation of a tree in C++

    - by Topo
    I need to write a tree where each element may have any number of child elements, and because of this each branch of the tree may have any length. The tree is only going to receive elements at first, and then it is going to be used exclusively for iterating through its branches in no specific order. The tree will have several million elements and must be fast but also memory efficient. My plan is to make a node class to store the elements and the pointers to its children. When the tree is fully constructed, it would be transformed into an array or something faster and, if possible, loaded into the processor's cache. Construction and traversal of the tree are two different problems; can I focus on how to solve each one in the best way individually? The construction has to be as fast as possible but it can use memory as it pleases. Then comes the transformation into a format that gives us speed when iterating the tree's branches, preferably an array, to avoid going back and forth from RAM to cache for each element of the tree. So the real question is: which structure should I use to implement the tree to maximize insert speed, and how can I transform it into a structure that gives me the best speed and memory use?
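    Here is a minimal sketch of that two-phase plan, written in Python for brevity even though the question targets C++: build the tree with ordinary nodes, then flatten it breadth-first into parallel arrays so that each node's children occupy one contiguous slice and iteration walks memory sequentially. The names are illustrative only:

        # Phase 1: pointer-style nodes for fast, unconstrained construction.
        from collections import deque

        class Node:
            __slots__ = ("value", "children")
            def __init__(self, value, children=()):
                self.value = value
                self.children = list(children)   # any number of children per element

        # Phase 2: flatten into parallel arrays in breadth-first order, so the
        # children of node i are the contiguous slice
        # values[first_child[i] : first_child[i] + child_count[i]].
        def flatten(root):
            values, first_child, child_count = [], [], []
            queue = deque([root])
            while queue:
                node = queue.popleft()
                values.append(node.value)
                # Children are enqueued behind everything already waiting,
                # so they end up contiguous starting at this offset.
                first_child.append(len(values) + len(queue))
                child_count.append(len(node.children))
                queue.extend(node.children)
            return values, first_child, child_count

        root = Node("r", [Node("a", [Node("c")]), Node("b")])
        vals, fc, cc = flatten(root)   # vals == ['r', 'a', 'b', 'c']

    In C++ the same idea would use index-based arrays (e.g. std::vector) instead of Python lists; the point is only to illustrate the node-then-array transformation.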

    Read the article

  • "Meet in the middle" with SSH

    - by stillinbeta
    I have an interesting question regarding SSH. I have a machine at school that I'd like to be able to access from elsewhere. It's behind a firewall/NAT, so I can't get at it directly. I have a leased web server that I can SSH into from anywhere. I was wondering if I could do some voodoo with port forwarding to get to my machine at school via the web server. I think this comes down to whether you can do SSH "backwards," which may or may not be possible. Basically:

    - Machine A can access Machine B
    - Machine C can also access Machine B

    How can Machine A access Machine C?

    Read the article

  • Playing games over RDP and utilizing the resources of a powerful PC... [closed]

    - by Alex
    Possible Duplicate: Is it possible to run games over remote desktop?

    Hey guys, I have a question that may be unusual but is very interesting. I have a laptop and a powerful PC, and I want to utilize ALL the power of the powerful PC from my notebook - for example, run games like F.E.A.R. on the powerful PC and then play them over Remote Desktop on my laptop. The two PCs may be connected LAN to LAN, over Wi-Fi, FireWire, or any other way; that does not matter. Google told me that it won't work over RDP due to protocol limitations and that there would be many bumps in the road. But maybe you guys can point me in the right direction? To formalize: we'd like to utilize all the resources of one PC from another PC over the network. How should we do that? Any ideas?

    Read the article

  • How to prevent dual booted OSes from damaging each other?

    - by user1252434
    For better compatibility and performance in games I'm thinking about installing Windows in addition to Linux. I have security concerns about this, though. Note: "Windows" in the remaining text includes not only the OS but also any software running on it, regardless of whether it comes included or is additionally installed, and whether it is started intentionally or unintentionally (virus, malware). Is there an easy way to achieve the following requirements?

    - Windows MUST NOT be able to kill my Linux partition or my data disk - neither single files (virus infection) nor by overwriting the whole disk.
    - Windows MUST NOT be able to read the data disk (extra protection against spyware).
    - Linux may or may not have access to the Windows partition.
    - Both Linux and Windows should have full access to the graphics card; this rules out desktop VM solutions for gaming - I want the manufacturer's Windows graphics card driver.

    Regarding Windows being unable to destroy my Linux install: this is not just the usual paranoia; it has happened to me in the past, so I don't accept "no ext4 driver" as an argument. Once bitten, twice shy. And even if destruction targeted at specific (Linux) files is nearly impossible, there should be no way to shred the whole partition. I may accept the risk of malware breaking out of a barrier (e.g. a VM) around the whole Windows box, though. Currently I have a system disk (SSD) and a data disk (HDD), both SATA. I expect I will have to add another disk; if I don't, even better. My CPU is an Intel Core i5 with VT-x and VT-d available, though untested. Ideas I've had so far:

    - Deactivate or hide the other HDs until reboot, at a low level - is this possible? Can the boot loader (GRUB) do this for me?
    - Tiny VM layer: load Windows in a VM that provides access to almost all hardware except the HDs. Is there any ready-made software solution for this? Preferably free. As I said, the main problem seems to be providing full access to the graphics card.
    - Hardware switch to cut power to the disks: commercial products are expensive, and there are lots of warnings against cheap home-built solutions. Preferably all three hard disks with one switch (one push). Mobile racks - won't the wear of daily swapping be a problem?

    Read the article

  • Which powerful laptop, with UK keyboard and 8GB RAM

    - by RobinL
    I've been searching high and low for high-spec laptops compatible with Ubuntu. There is a surprising lack of coherent information on the topic (considering the number of people who apparently want a good laptop with an open-source OS), so I thought you may have some advice. My requirements:

    a) has >= 8GB RAM
    b) is compatible with Ubuntu
    c) has a UK keyboard and charger
    d) does not cost the Earth

    Which would you go for? Does anyone have good experience with high-end laptops running Ubuntu? Here's some background research: the Samsung Series 7 looks great, but has various problems on Ubuntu, including poor battery life, a touchpad that does not work, and a graphics card that is not fully supported and sucks power when it does work (see [here] and [here], for example). Other options on the [wish list] include: the sensible [Acer] (possibly the no. 1 choice, but not sure about graphics card compatibility or battery), a nice-looking [HP Pavilion dv6-6c56ea], which also has incompatibility issues (see [here] and [here] and check ubuntuforums), and another [Acer] which may be best due to its simplicity and cheapness. Other sub-questions: didn't Dell offer Ubuntu support for decent laptops (above 6GB RAM their offerings are scarce)? What about pre-installed options such as those provided by System76? If it weren't for the UK keyboard and charger, I'd probably go for this [amazing-looking] [machine]. Many thanks for any advice. P.S. Apologies for the lack of hyperlinks; I'm a noob so only allowed 2 :( All 10 links are available here though for the interested reader :) Robin

    Read the article

  • How to delete a folder in Python when [Error 32] is present

    - by harish
    I am using Python 2.7. I want to delete a folder which may or may not be empty. The folder is handled by a file-monitoring thread; I am not able to kill the thread, but I want to delete this folder anyhow. I tried:

        os.rmdir(Location)
        shutil.rmtree(Location)
        os.unlink(Location)

    But it didn't work. It shows the error:

        [Error 32] The process cannot access the file because it is being used by another process: 'c:\\users\\cipher~1\\appdata\\local\\temp\\fis\\a0c433973524de528420bbd56f8ede609e6ea700'

    I want to delete the folder a0c433973524de528420bbd56f8ede609e6ea700, or deleting the whole path will also suffice.
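    A hedged sketch of one common workaround: keep retrying the removal, since [Error 32] persists only as long as the other process or thread holds a handle inside the folder; if the monitoring thread never releases its handle, no amount of retrying will help. The function name and retry parameters are illustrative:

        # Minimal sketch: retry shutil.rmtree for a while, because [Error 32] persists
        # only as long as another process (or thread) holds a handle inside the folder.
        # Works on Python 2.7 and 3; names and retry parameters are illustrative.
        import shutil
        import time

        def rmtree_with_retries(path, attempts=10, delay=0.5):
            for attempt in range(attempts):
                try:
                    shutil.rmtree(path)
                    return True
                except OSError:
                    if attempt == attempts - 1:
                        raise                 # still locked after all attempts
                    time.sleep(delay)         # give the open handle a chance to close

        # Example with the (hypothetical) path from the question:
        # rmtree_with_retries(r"c:\users\cipher~1\appdata\local\temp\fis\a0c433973524de528420bbd56f8ede609e6ea700")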

    Read the article

  • Where to find information about Ubuntu-compatible or certified hardware/PC models

    - by Halkinn
    I am buying a new desktop PC in early 2013; anyway, this question should apply to someone intending to buy a new laptop/ultrabook as well. This machine is not meant for gaming, and if I occasionally do game, I can survive with minimal graphics. However, I may need some heavy multimedia editing or multitasking at times, so basically my greatest priority is a good processor, after that perhaps an average graphics card (if onboard graphics is not enough - I am still not informed enough about that), and at least 4GB of RAM with the possibility of expansion. I know there are some PC models specially designed to ship with Ubuntu, which is the OS I use the most these days. However, most people around me use Windows and some software that has no supported Linux version, so not having a Windows license becomes a bit problematic. Given that, I would like to find information about which PC models or even manufacturers currently on the market have the best compatibility with Ubuntu. I am still undecided between building my own desktop or buying a pre-made model, so I would like to find information both for certified models and certified hardware, or even Ubuntu partners that may work closely with Canonical. Where can I find this information, in order to make sure that I will have a good experience with Ubuntu on my new PC in the years to come?

    Read the article

  • I use Windows 7 and am looking for a tool to help me tag/organize my bookmarks as well as my thoughts and projects.

    - by tomcat23
    I've got my bookmarks in Chrome currently, and am prone to just bookmarking something if it contains any info I may need for the project I'm currently working on. I had found a way to tag them with Delicious and then export from there into a WordPress test server (with the tags), creating a post for each bookmark, but this proved to be a bit of a waste of time, as there's no way to organize it well. Ideally I'd like to find some sort of mind-mapper with a Prezi-like view that does auto-tagging and excerpting and lets me annotate things effectively. Does my dream tool even exist? I usually have 20+ tabs open all at once because there may be something open on each that I need to see/know to make my current project work. It's frustrating. Though I'm on Win7, I'm interested to hear about any tools out there that take your existing bookmarks and help you organize them productively.

    Read the article
