Search Results

Search found 8785 results on 352 pages for 'bug reporting'.

Page 204/352

  • Powershell overruling Perl binmode?

    - by hippietrail
    I have a Perl script which creates a binary file while scanning a very large text file. It writes to STDOUT, which I redirect on the command line to a file. To optimize it I'm making changes, then seeing how long it takes to run. On Linux I use the "time" command for this; on Windows the best way to time a program seemed to be PowerShell's "measure-command". This seemed to work fine, but I noticed the generated files were larger. On examination I found that the files generated from within PowerShell begin with a BOM and contain CRLF pairs! My Perl script has a "binmode STDOUT" directive and works correctly in a normal DOS box (cmd.exe). Is this a bug or misfeature in PowerShell or measure-command? Has it affected others creating binary files by means other than Perl? Googling hasn't turned anything up so far. I'm using Perl 5.12, PowerShell v1.0 and Windows XP.
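
    A minimal sketch of what seems to be going on (the script and file names are hypothetical): PowerShell's own > redirection decodes the child process's output as text and re-encodes it (as Unicode with a BOM, with newline translation), so the bytes never reach the file untouched. Routing the redirection through cmd.exe keeps it out of PowerShell's pipeline:

        # times the run, but > here is PowerShell's text-mode redirection,
        # which is what adds the BOM and the CRLF pairs
        Measure-Command { perl scan.pl big-input.txt > out.bin }

        # letting cmd.exe perform the redirection leaves the stream binary
        Measure-Command { cmd /c "perl scan.pl big-input.txt > out.bin" }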

    Read the article

  • Paging enormous tables on DB2

    - by grenade
    We have a view that, without constraints, will return 90 million rows, and a reporting application that needs to display paged datasets of that view. We're using NHibernate and recently noticed that its paging mechanism looks like this:

        select * from
          (select rownumber() over() as rownum,
                  this_.COL1 as COL1_20_0_,
                  this_.COL2 as COL2_20_0_
           FROM SomeSchema.SomeView this_
           WHERE this_.COL1 = 'SomeValue') as tempresult
        where rownum between 10 and 20

    The query brings the DB server to its knees. I think what's happening is that the nested query assigns a row number to every row satisfying the where clause before selecting the subset (rows 10-20). Since the nested query returns a lot of rows, the mechanism is not very efficient. I've seen lots of tips and tricks for doing this efficiently on other SQL platforms, but I'm struggling to find a DB2 solution. In fact, an article on IBM's own site recommends the approach NHibernate has taken. Is there a better way?
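
    One approach worth trying, as a hedged sketch: keyset (seek) pagination, which remembers the last key of the previous page instead of numbering every matching row. This assumes the view exposes an indexed, stable sort key; the ID column here is hypothetical:

        SELECT this_.ID, this_.COL1, this_.COL2
        FROM SomeSchema.SomeView this_
        WHERE this_.COL1 = 'SomeValue'
          AND this_.ID > :lastSeenId     -- highest key from the previous page
        ORDER BY this_.ID
        FETCH FIRST 10 ROWS ONLY
        OPTIMIZE FOR 10 ROWS

    FETCH FIRST n ROWS ONLY plus OPTIMIZE FOR n ROWS lets DB2 stop scanning once the page is filled, rather than materializing row numbers for the whole result set.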

    Read the article

  • How do I get nginx to issue 301 requests to HTTPS location, when SSL handled by a load-balancer?

    - by growse
    I've noticed that there's functionality enabled in nginx by default whereby a URL request without a trailing slash, for a directory which exists in the filesystem, automatically has a slash added through a 301 redirect. E.g. if the directory css exists within my root, then requesting http://example.com/css will result in a 301 to http://example.com/css/. However, I have another site where the SSL is offloaded by a load balancer. In this case, when I request https://example.com/css, nginx issues a 301 redirect to http://example.com/css/, despite the fact that the HTTP_X_FORWARDED_PROTO header is set to https by the load balancer. Is this an nginx bug, or a config setting I've missed somewhere?
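
    One workaround sketch, assuming nginx 1.11.8 or later: nginx builds the redirect's Location header from its own $scheme, which is plain http on the backend, and it does not consult X-Forwarded-Proto for this. Emitting relative redirects instead lets the client keep whichever scheme it used to reach the load balancer:

        server {
            listen 80;
            # send "Location: /css/" instead of "Location: http://example.com/css/"
            absolute_redirect off;
            # ... rest of the site config ...
        }

    On versions older than 1.11.8 the usual route is to issue the redirect yourself, building the target from $http_x_forwarded_proto.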

    Read the article

  • Experience with AMCC 3ware 9650se raid cards? Ours seems dead

    - by antiduh
    We have an 8-port 3ware 9650SE RAID card for our main disk array. We had to bring the server down for a pending power outage, and when we turned the machine back on, the RAID card never started. This card has been in service for a couple of years without problems, and was working up until the shutdown. Now, when we turn the machine on, the BIOS option ROM that normally kicks in before the bootloader doesn't show up, none of the drives start, and when the OS tries to access the device, it just times out. The firmware on it has been upgraded in the past, so it's possible we've hit some sort of firmware bug. We're using it in a Silicon Mechanics R272 machine with Gentoo as the OS. The OS eventually boots, but alas, without the card. We've ordered a new one, but I'm worried that if we replace the card it won't recognize the existing array. Has anybody performed a card swap before? Any help would be greatly appreciated.

    Read the article

  • Multi-monitor aterm transparency

    - by Bryan Ward
    I have 3 monitors, whose background I set using:

        xpmroot my-5760x1200bg.png

    I then set up aterm to use transparency by adding the following to my ~/.Xdefaults file:

        aterm*transparent:true
        aterm*shading:60
        aterm*background:Black
        aterm*foreground:White
        aterm*scrollBar:true
        aterm*scrollBar_right:true
        aterm*transpscrollbar:true
        aterm*saveLines:32767
        aterm*font:*-*-fixed-medium-r-normal--*-140-*-*-*-*-iso8859-1
        aterm*boldFont:*-*-fixed-bold-r-normal--*-*-140-*-*-*-*-iso8859-1

    I am getting transparency on my aterm windows, but the image showing through the transparency isn't correct. On the left monitor things are fine, but the middle and right monitors both seem to use the leftmost 1920x1200 of the background image as what is behind the terminal window. It is as if every screen had the same background as the monitor on the left. Is this something that can be configured correctly, or is this a bug? I'm running Gentoo Linux with Xmonad.

    Read the article

  • Apple Mail doesn't apply rules unless I choose "Apply Rules" manually

    - by porneL
    I'm using Apple Mail with an IMAP account. I have several filtering rules defined. The problem is that Mail doesn't apply them automatically to incoming email; even spam isn't filtered automatically. For all incoming email, every time, I have to select the e-mails and choose "Apply Rules", and then the rules work fine (that one time, on the selected e-mails only). It works like this on two separate installs of Mail with different accounts (both IMAP, though). How can I get Mail to apply all rules automatically, every time, to all e-mails? Does it ignore rules because of a misconfiguration or a bug, or does Apple seriously expect people to use the "Apply Rules" menu item regularly?

    Read the article

  • Format as NTFS without Journal

    - by palswim
    I have a flash drive that I'd like to format for use in Windows. I would like support for symbolic links, so I can't use FAT/FAT32/exFAT. I would have preferred the ext4 filesystem with journaling disabled, via the Ext2Fsd filesystem driver, but have (so far) found that: I can't make soft links across filesystems that Windows will read; Ext2Fsd has an annoying bug where it always mounts partitions read-only, and it has problems resuming from sleep; and some programs have problems writing to the partition even after manually configuring Ext2Fsd to allow writes. So, I would like to use NTFS for the flash drive, but disable the journaling feature (which causes extra writes), if possible. How can I do this?
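
    A partial sketch of what is possible here, as far as I know: NTFS's metadata transaction log ($LogFile) cannot be turned off, but the USN change journal, a separate source of background writes, can be deleted after formatting (the drive letter is an assumption):

        rem format the stick as NTFS, then remove its USN change journal
        format X: /fs:ntfs /q
        fsutil usn deletejournal /d X:

    Note this only removes the change journal; the core NTFS transaction log remains.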

    Read the article

  • Custom flash mp3 player stopping in the middle of playing audio on windows nt ie6 system

    - by Charlotte Moller
    We have used a custom MP3 Flash player on our website for many years without any issues, but recently a client of ours reported that the audio plays for several seconds and then stops. When they refresh the page, or click play in the player again, the audio plays fine. We are puzzled as to what could be causing this issue after the player has run successfully for our clients for so many years. The client system is Windows NT running IE6. Does anyone have any idea what could cause the audio to behave this way? Could audio drivers or the version of Flash cause problems? We do not have Flash programmers on our team, so we are not even sure where to start looking within the Flash code of the player. Any ideas?

    Read the article

  • Double root folder vs single root folder

    - by Tomas
    On my Linux box, in bash, I have access to a "double root" folder denoted by two forward slashes:

        tomas:~ $ cd /
        tomas:/ $ ls
        bin/ cdrom@ ...
        tomas:/ $ cd //
        tomas:// $ ls
        bin/ cdrom@ ...

    The content of the folder and its subfolders is identical to the "normal" single-slash root. The double slash does not go away when I access its subfolders. The anomaly does not repeat itself with three or more slashes; those are simple synonyms for the root:

        tomas:// $ cd home/tomas
        tomas://home/tomas $ cd ///
        tomas:/ $ cd ////
        tomas:/ $

    What kind of place is it? Is it a bug? Can anyone explain the anomaly?

    Read the article

  • MySQL cluster: Error after 1 data node is shutdown and started again

    - by nitins
    We have configured a MySQL cluster (version 7.1) with 2 SQL/data nodes. We are using tablespaces instead of in-memory clustering. The setup was working fine, so to test it I shut down one data node, updated a table, and then started the stopped node again. It's giving this error and not starting. Any ideas?

        Forced node shutdown completed. Occured during startphase 5.
        Caused by error 2306: 'Pointer too large(Internal error, programming
        error or missing error message, please report a bug).
        Temporary error, restart node'.

    Read the article

  • why create "EventType clr20r3, P1 w3wp.exe" but don't have detail description of this unhandled exce

    - by Weixiao.Fan
    On the production server, I can see this event in the system Event Viewer when an ASP.NET app crashes:

        EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.3959, P3 45d691cc,
        P4 app_web_default.aspx.cdcab7d2, P5 0.0.0.0, P6 4b2e4bf0,
        P7 4, P8 4, P9 system.dividebyzeroexception, P10 NIL.

    It belongs to the ".NET Runtime 2.0 Error Reporting" category. But I can't find an event belonging to "ASP.NET 2.0.50727.0" that would give me a detailed view of the exception, like this:

        An unhandled exception occurred and the process was terminated.
        Application ID: /LM/W3SVC/505951206/Root
        Process ID: 1112
        Exception: System.DivideByZeroException
        Message: Attempted to divide by zero.
        StackTrace:
          at _Default.Foo(Object state)
          at System.Threading.ExecutionContext.runTryCode(Object userData)
          at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
          at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
          at System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(_ThreadPoolWaitCallback tpWaitCallBack)
          at System.Threading.ThreadPoolWaitCallback.PerformWaitCallback(Object state)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp

    I can find both of these events on my dev machine. Is that because Visual Studio is installed? If so, how can I disable the detailed event so I can emulate the production environment? Great thanks and best regards, Fan
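
    If the detailed event is the one to switch off, a hedged guess: that entry is written by ASP.NET 2.0's health monitoring (the EventLogWebEventProvider), so disabling health monitoring in web.config on the dev box should leave only the generic clr20r3 record:

        <!-- sketch: turns off ASP.NET health-monitoring events, including the
             detailed "unhandled exception" Application-log entry -->
        <system.web>
          <healthMonitoring enabled="false" />
        </system.web>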

    Read the article

  • SNMP counter issues with cisco RV082

    - by Chance
    Does anyone else poll this router with SNMP? We are using firmware version 2.0.0.19-tz. We are having problems with the traffic counters: some of them appear to be implemented as 16-bit counters instead of 32-bit counters. This causes problems because they roll over to 0 (at around 65,000) in less than our one-minute polling cycle, really skewing our metrics. The counter for the LAN (interface 2) seems to be functioning properly; however, interfaces 3 and 4 (WAN and DMZ / WAN2) roll over at 65,000.

        Tue May 11 08:38:31 EDT 2010
        IF-MIB::ifInOctets.1 = Counter32: 137634
        IF-MIB::ifInOctets.2 = Counter32: 1865677943
        IF-MIB::ifInOctets.3 = Counter32: 12450
        IF-MIB::ifInOctets.4 = Counter32: 49354

    Look at counter IF-MIB::ifInOctets.4 five seconds later:

        Tue May 11 08:38:36 EDT 2010
        IF-MIB::ifInOctets.1 = Counter32: 137634
        IF-MIB::ifInOctets.2 = Counter32: 1865836207
        IF-MIB::ifInOctets.3 = Counter32: 13167
        IF-MIB::ifInOctets.4 = Counter32: 12900

    Any suggestions? Seems like a bug to me, but I just wanted to make sure I wasn't crazy. Thanks!
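
    One thing worth checking, as a sketch (the community string and address are placeholders): if the RV082 exposes the 64-bit high-capacity counters from IF-MIB, polling those instead sidesteps wrap problems entirely:

        # walk the 64-bit octet counters; if the device answers, poll these
        snmpwalk -v2c -c public 192.168.1.1 IF-MIB::ifHCInOctets

    If only Counter32 objects are available, the usual fallback is to poll often enough that at most one wrap can occur between samples and correct for it in the collector.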

    Read the article

  • Force Colloquy not to use built-in Growl notifications

    - by thepurplepixel
    Whenever Colloquy needs to pop up a notification (for example, when you are PM'd), it uses its built-in Growl notifications, which really annoy me because they stay on the screen until they are clicked (at least NOTICEs do, anyway). I'd like to make Colloquy use the Growl that I have installed on my Mac, not its built-in Growl notifications. That way, I could change its preferences from the Growl .prefpane and it would match the look of all my other notifications. I seem to remember this being possible (maybe from a bug report or something), but I can't remember how. Thanks!

    Read the article

  • Statistics based marketing campaign measurement tools

    - by AFHood
    We are currently using SAS as the measurement engine and Business Objects as the display layer, and we are looking to develop a new, faster, slicker solution. Has anyone developed or purchased a campaign measurement reporting system? The solution should measure everything from email stats, web stats, and customer activity to lift, ROI, etc. So far I'm researching and finding nada. We are working with a team from India, and they want to re-write everything from scratch. Are there any solutions out there at all?

    Read the article

  • Why does installing NVidia 9600GT graphics card, take 1GB of RAM away from Windows?

    - by Nick G
    Hi, I've changed graphics cards in my PC and now Windows 7 (32-bit) is reporting that I have a whole gigabyte less physical RAM. Why is this? Firstly, the machine has 4GB of physical RAM. The old card was an ATI 2600XT with 256MB and the new card is an NVidia 9600GT with 512MB. With the ATI card Windows sees 3326MB; with the NVidia card it sees 2558MB. I realise that due to address-space restrictions I will not see all 4GB with 32-bit Windows, but why is there such a massive loss of RAM from simply changing cards, bearing in mind that BOTH cards have their own RAM and borrow no main memory the way some chipset-integrated GPUs do? Would using 64-bit Windows solve this? Thanks, Nick.

    Read the article

  • Wisdom of merging 100s of Oracle instances into one instance

    - by hoytster
    Our application runs on the web, is mostly an inquiry tool, and does some transactions. We host the Oracle database. The app has always had a different instance of Oracle for each customer. A customer is a company which pays us to provide our service to the company's employees, typically 10,000-25,000 employees per customer. We do a major release every few years, and migrating to that new release is challenging: we might have a team at the customer site for a couple of weeks, explaining new functionality and setting up the driving data to suit that customer. We're considering going multi-client, putting all our customers into a single shared Oracle 11g instance on a big honkin' Windows Server 2008 server, in order to reduce costs. I'm wondering if that's advisable. There are some advantages to having separate instances for each customer. Tell me if these are bogus, please. In my rough guess, in decreasing order of importance:

    1. Our customers MyCorp and YourCo can be migrated separately when breaking changes are made to the schema. (With multi-client, we'd be migrating 300+ customers overnight!?!)
    2. MyCorp's data can be easily backed up and (!!!) restored, without affecting other customers.
    3. MyCorp's data is securely separated from their competitor YourCo's data, without depending on developers to get the code right and/or DBAs to get the configuration right.
    4. Performance is better because the database is smaller (5,000 vs 2,000,000 rows in ~50 tables).
    5. If MyCorp's offices are (mostly) in just one region, then MyCorp's instance can be geographically co-located there, so network lag doesn't hurt performance. We can provide better service to global clients for the same reason.
    6. If MyCorp wants to take their database in-house, we can easily export their instance to give MyCorp their data.
    7. Load-balancing is easier because instances can be placed on different servers (this is with a web farm).
    8. When a DEV or QA instance is needed, it's easier to clone the real instance and anonymize the data, because there's much less data.
    9. Because they're small enough, developers can have their own instance running locally, so they can work on code while waiting at the airport and while in-flight, without fighting VPN hassles.

    Q1: What are other advantages of separate instances?

    We are contemplating changing the database schema and merging all of our customers into one Oracle instance, running on one hefty server. Here are the advantages of the multi-client instance approach, most important first (my WAG). Please snipe if these are bogus:

    1. Less work for the DBAs, since they only need to maintain one instance instead of hundreds. Less DBA work translates to cheaper, our main motive for this change.
    2. With just one instance, the DBAs can do a better job of optimizing performance. They'll have time to add appropriate indexes and review our SQL.
    3. It will be easier for developers to debug and enhance the application, because there is only one schema and one app (there might be dozens of schema versions if there are hundreds of instances, with a different version of the app for each version of the schema). This reduces costs too. The alternative is having to start every debug session with (1) "What version is this customer running?" and (2) "Let's struggle to recreate the corresponding development environment, code and database." (We'd need a virtual machine that includes the code AND the database instance for each patch and release!)
    4. Licensing Oracle is cheaper because it's priced per server irrespective of heft (or something; I don't know anything about the subject).
    5. The database becomes a viable persistent store for web session data, because there is just one instance.
    6. Some database operations are easier with one multi-client instance, like finding a participant when they're hazy about which customer they (or their spouse, maybe) work for: all the names are in one table.
    7. Reporting across customers is straightforward.

    Q2: What are other advantages of having multiple clients in one instance?

    Q3: Which approach do you think is better (and why)? Instance per customer, or all customers in one instance?

    I'm concerned that having one multi-client instance makes migration near-impossible, and that's a deal killer... unless there is a compromise solution, like having two multi-client instances, the old and the new. In that case, we would design cross-instance solutions for finding participants, reporting, etc., so customers could go from one multi-client instance to the next without anything breaking. THANKS SO MUCH for your collective advice! This issue is beyond me, but not beyond the collective you. :) Hoytster
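
    As a sketch of the multi-client shape being discussed (the table and column names are made up): every shared table carries a customer discriminator, which is what makes cross-customer lookups like the "hazy participant" search in point 6 a single query:

        -- every row is tagged with its tenant
        CREATE TABLE participant (
          customer_id     NUMBER        NOT NULL,
          participant_id  NUMBER        NOT NULL,
          name            VARCHAR2(200) NOT NULL,
          CONSTRAINT pk_participant PRIMARY KEY (customer_id, participant_id)
        );

        -- find a participant without knowing the employer
        SELECT customer_id, participant_id, name
          FROM participant
         WHERE UPPER(name) LIKE 'SMITH%';

    The flip side is that every other query in the app must filter on customer_id, which is exactly the developers-get-the-code-right risk listed against the shared instance above.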

    Read the article

  • OSX: iTunes won't play .ogg files over shared library

    - by Decavolt
    I have an OS X 10.5 box that is sharing its iTunes library, and another 10.6 box connecting to that library through iTunes. It works as expected, except for Ogg files. Those Ogg files are visible and selectable in the shared library, but will not play through the share on the client machine. If I move the Ogg files to the client machine, they play locally just fine. They also play locally on the host machine; they just won't play through the iTunes share. Is this a bug or some other known issue? Is there a fix, short of converting all the Ogg files to MP3?

    Read the article

  • Ubuntu 13.10: nslookup not automatically appending DNS suffixes

    - by Alex
    While configuring an Ubuntu 13.10 server I ran into a problem. Usually (working on 12.10 machines) I add the following to my /etc/resolv.conf file:

        nameserver 192.168.2.180
        domain our.domain.com

    Normally, when I then ping a given host, e.g.:

        ping host01

    it resolves to the FQDN host01.our.domain.com. However, in Ubuntu 13.10 this doesn't seem to be working; it just returns the following:

        ~# nslookup host01
        Server:         192.168.2.180
        Address:        192.168.2.180#53

        ** server can't find host01: SERVFAIL

    That is expected, since the DNS server doesn't respond to a bare 'host01' request. But if I do the same nslookup on an Ubuntu 12.10 machine, it automatically appends the 'our.domain.com' suffix to whatever I throw at it that doesn't already have that suffix. Is this a 13.10 bug, or am I doing something wrong?
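
    One likely culprit, sketched under the assumption that the box uses ifupdown with resolvconf (the interface name and addresses are placeholders): on 13.10, /etc/resolv.conf is a symlink generated by resolvconf, so hand edits can be silently discarded. Declaring the settings on the interface persists them:

        # /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.168.2.50
            netmask 255.255.255.0
            gateway 192.168.2.1
            dns-nameservers 192.168.2.180
            dns-search our.domain.com

    After ifdown eth0 && ifup eth0, the generated resolv.conf should carry a "search our.domain.com" line, which is what makes the suffix appending work.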

    Read the article

  • Relying on nhibernate's second level cache vs pushing objects into asp.net session

    - by AhmetC
    I have some big entities which are frequently accessed in the same session. For example, in my application there is a reporting page consisting of dynamically generated chart images. For each chart image on this page, the client makes a request to the corresponding controller, and the controller generates the image using some entities. I can either use ASP.NET's session dictionary for "caching" those entities, or rely on NHibernate's second-level cache support, using cached queries for example. What is your opinion? By the way, I will be using shared hosting; is NHibernate's second-level cache hosting-friendly? Thanks.
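
    For reference, a minimal sketch of the second-level-cache route (the entity name is an assumption, and a cache provider must be configured in the session factory for this to have any effect):

        // mark the query cacheable so repeated chart requests hit the
        // query cache instead of the database
        var entities = session
            .CreateQuery("from ReportEntity r where r.Year = :year")
            .SetInt32("year", 2010)
            .SetCacheable(true)
            .List<ReportEntity>();

    With an in-process provider (e.g. the hashtable cache) this works on shared hosting, since nothing runs outside the worker process; distributed providers such as Memcached are the ones that typically need infrastructure a shared host won't allow.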

    Read the article

  • Outlook 2007 - Monospace/preformatted quickstyle

    - by joelarson
    In Outlook 2003 I could set a block of text as preformatted, which was mighty handy for emailing code snippets. This prevented line wrapping and specified that a monospace font be used by whatever email client received my message. Now in Outlook 2007, there are at least three problems:

    1. The 'Plain Text' style (apparently the closest thing to 'preformatted' as a style) is buried beneath about five clicks, and adding it to the quick style gallery and saving the gallery seems to be forgotten (a bug?).
    2. It uses a fancy Microsoft font, 'Consolas', that is not on some of my coworkers' machines, which use Thunderbird.
    3. There does not appear to be any way to get the text flagged as a monospace font so that other email clients will display it properly.

    Does anybody know how to quickly set text to 'preformatted' as in Outlook 2003, meaning A) use a monospace font, B) flag it as monospace for other clients, and C) avoid hard line breaks?

    Read the article

  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details at the end. I didn't manage to fix it, and I will file a bug.

    EDIT 2: I managed to fix it, and it apparently works.

    I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installation CD (then updated it) on each of these machines. The cloud seems to work fine; I have lots of virtual machines running Natty servers on it. Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it:

        uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64

    exactly as I did with Natty, then tried to launch an instance using Oneiric's image:

        euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX

    but the instance is not able to boot. With euca-get-console-output I get the following:

        [    0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0)
        [    0.462388] Please append a correct "root=" boot option; here are the available partitions:
        [    0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
        [    0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu
        [    0.466526] Call Trace:
        [    0.466989]  [<ffffffff815d3ee5>] panic+0x91/0x194
        [    0.467860]  [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e
        [    0.468891]  [<ffffffff81ad126a>] mount_root+0x54/0x59
        [    0.469829]  [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7
        [    0.470883]  [<ffffffff81ad0d76>] kernel_init+0x140/0x145
        [    0.471837]  [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10
        [    0.472889]  [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df
        [    0.473884]  [<ffffffff815f38e0>] ? gs_change+0x13/0x13

    The filesystem is labeled "cloudimg-rootfs"; inside the image, both /etc/fstab and /boot/grub/grub.cfg refer to the image by that label. Everything seems correct, yet the kernel says it can't find the root filesystem. I've spent many hours googling, but nothing came up. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks.

    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus; the image itself is very buggy. The first problem is that the kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages, and replaced them with the -virtual ones. Then I extracted the new kernel (and replaced the original -generic one that came with the tarball), because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then the console started showing this:

        mount: mount point ext4 does not exist

    If you check the /etc/fstab file in the image, it says:

        LABEL=cloudimg-rootfs   ext4   defaults   0 1

    Damn, where's my mount point? Note that it is missing /proc as well. Well, just when you think it is over, you notice that your instance has no network connectivity. Let's check /etc/network/interfaces:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

    Oh my! It is missing eth0... Here I stopped; I couldn't take any more. I gave up. It looks like Canonical just forgot to set this image up properly.
    At first I thought, "have I downloaded a server image by mistake?", but no, I double-checked. It really is the cloud image; it even has "cloud-init" installed (which is not, by default, on server images). They just forgot to prepare it. I will file a bug (and reference it here once that is done), and hope they fix it soon!

    EDIT 2: it looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea whether the image is now good to go...
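
    For anyone else hitting this, a condensed sketch of the repairs described above (the loop device, image path, and exact package names are assumptions; adjust to your extraction of the tarball):

        # mount the image and chroot into it
        sudo mount -o loop oneiric-server-cloudimg-amd64.img /mnt
        sudo chroot /mnt

        # swap the -generic kernel for the -virtual one
        apt-get update
        apt-get install linux-virtual
        apt-get purge linux-image-3.0.0-13-generic linux-generic

        # give the root filesystem its missing mount point
        echo "LABEL=cloudimg-rootfs / ext4 defaults 0 1" > /etc/fstab

        # restore eth0 so the instance gets an address
        cat >> /etc/network/interfaces <<'EOF'
        auto eth0
        iface eth0 inet dhcp
        EOF
        exit

    The new kernel in /boot then has to be extracted and passed to uec-publish-tarball in place of the -generic one, as noted above.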

    Read the article

  • VS2010 profiler/leak detection

    - by Noah Roberts
    Does anyone know of a profiler and leak detector that will work with VS2010 code, preferably one that runs on Win7? I've searched here and on Google. I've found one leak detector that works (Memory Validator), but I'm not too impressed: for one thing, it shows a bunch of menu leaks and such which I'm fairly confident are not real. I also tried GlowCode, but it's JUST a profiler, and it refuses to install on Win7. I used to use AQtime; it had everything I needed: memory/resource leak detection, profiling of various things, static analysis, etc. Unfortunately it gives bogus results now. My main immediate issue is that VS2010 is reporting leaks in a program that had none under VS2005. I'm almost certain they're false positives, but I can't seem to find a good tool to verify this. Memory Validator doesn't show the same ones, and the reporting of leaks from VS doesn't seem rational.
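
    One cheap cross-check, as a sketch: the CRT debug heap gives a second opinion on whether the reported leaks are real (MSVC Debug builds only; the report prints to the Output window at exit):

        // compile as a Debug build; CRT allocations are tracked, and any
        // still live at exit are dumped with their allocation numbers
        #define _CRTDBG_MAP_ALLOC
        #include <stdlib.h>
        #include <crtdbg.h>

        int main()
        {
            _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
            char* leak = (char*)malloc(64);   // deliberately leaked for demonstration
            (void)leak;
            return 0;
        }

    If the VS2010-reported leaks don't show up here, that supports the false-positive theory.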

    Read the article

  • How does Amarok rip Audio CDs (in Ubuntu Lucid)?

    - by Hanno Fietz
    I'm in the process of moving my CD collection into my Amarok library. Mostly, it works great. Sometimes, however, the process just hangs forever. The problem seems to occur at random (i.e. often, but not always, at the same disc/track), and the consequences range from none (successful after cancel/retry) to Amarok's internal DB becoming completely messed up. I would like to investigate and file a proper bug report, or find a fix or workaround, but I don't understand how Amarok does the ripping. When all is working, there's a lame process encoding to a temporary file, which appears in my collection once it's finished. When the process hangs, that lame command is still there, but waiting forever for data on stdin, which seems to come from a third process. That seems to be kio_audiocd, but I don't know whether that's correct or what it's supposed to do. What's going on?

    Read the article

  • Certain users cannot get to my server

    - by Zeno
    I am finding more and more users who report that they cannot reach my server (website or services). A tracert from such a user looks like this:

        Tracing route to domain.com [*.*.*.255]
        over a maximum of 30 hops:

          1    *    *    *    Request timed out.

    The server is up and functional, and everyone else reports it is fine, but there are various users who cannot get to it. I have no firewall or anything that would block anyone. Yes, the last octet of the server IP is 255. Could this be causing it? http://www.dslreports.com/forum/r18539206-Last-octet-255-bug-on-Windows Or could a certain ISP be denying traffic to my server? Or something at their router level?

    Read the article

  • Corporate Wiki Organization - Technical Documentation

    - by Dave Jarvis
    Corporations have documents describing various aspects of their technical systems, including:

        Custom Applications
            Custom Development Frameworks
        Third Party Applications
            Accounting
            Bug Tracking
            Network Management
        How To Guides
        User Manuals
        Software Tools
            Web Browsers
            Development IDEs
            Graphics
                GIMP
                xv
            Text Editing
            File Transfer
                ncFTP
                WinSCP
        Hardware
            Servers
                Web
                Database
                Exchange
                File
            Network Devices
            Printers

    If you had to use a wiki to manage the documentation, what other items would you add to the list, and how would you organize it? (For example, would Software Tools make more sense under Third Party Applications?) A few constraints:

    - The structure should not go beyond three levels deep.
    - Avoid the word "and" in favour of two separate categories.
    - Keep the structure general: it should apply as broadly as possible.
    - The target audience is primarily technical, but the wiki could be visible to anyone.

    Read the article
