Search Results

Search found 3973 results on 159 pages for 'boost filesystem'.


  • Why can I not edit or delete directories inside of this directory

    - by user43053
    Hello there. First, I thought this was PHP related, but maybe it isn't. My original post, which may be irrelevant now, is located at the bottom. The problem is that I have a directory, /articles/. In it are 10 subdirectories. I have been changing the permissions lately, and now it seems all the permissions of the parent folder, sub-folders and files are either chmod 755 or 777. I cannot move, delete or edit files inside this parent directory or its subdirectories with my FTP client. I can, however, edit, delete and create new files and directories, and change them, with PHP functions without problems. What may the problem be?

    OLD POST (ignore everything below this line): If I create a directory with mkdir(), or create a file with fopen(), file_put_contents() or SimpleXMLElement::asXML(), I am unable to access the file with my FTP client or the cPanel File Manager. If I try to delete or edit them, I get errors. Dreamweaver suggests it is a permission problem or a network or filesystem fault (but I've set the permissions with chmod() to 0777, and when I check cPanel, it confirms chmod 777). I also tried to use fileowner(), and the function returns int(99), the same owner as the files that I could access with my FTP client. It seems files and directories created with PHP can only be modified or deleted with PHP. I thought this must be a server-setup-related issue, so I am posting it here. I am on a shared server, and I have no idea about setting up servers.

    EDIT: It seems the problem is different. I cannot move files with the FTP client to the parent or subdirectories either, so this problem may not be PHP related. It seems the problem applies to any directory, regardless of whether it was created by PHP.

    EDIT 2: The parent directory has chmod 755. Thank you for your time. Kind regards, Marius

    Read the article

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a MicroSD card online. It's a SanDisk 16GB class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it, but that doesn't seem to solve the problem. I have tried Windows and Linux (Ubuntu); both have the problem. I've used my USB MicroSD readers, and even tried putting it in my phone and putting data on it from there. All have this problem. Now the really odd thing is that, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any type of error. Now I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of Chinese-looking (random UTF-8 symbols, I suppose) folders and files, and it is impossible to do anything with those. All the other data (outside of the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3~4GB. After that I can still access the data, but as soon as I eject/safely remove/unmount it, the bad things happen somehow. Next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote to the time before last) are all gibberish. Does anybody have any clue what might be going on here?

    Read the article

  • How to back up servers to an SSH host with low traffic and access to versions and encryption?

    - by leto
    Hello, I've not run backups of my personal stuff for more years than I can remember, until waking up lately and realising, contrary to my prior belief: actually, I care! :) Now I have a central data server at home where I want to attach external media, to which I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years and also tried tar over ssh, but I don't yet know the simplest and easiest-to-maintain way to do it. Here's my workload:

    - A typical LAMP server (<5GB data), which I'd like to back up fully, so lots of small files, connected via 10Mbit
    - My personal stuff (<750GB data) from a Mac, connected via GE
    - My passwords in an encrypted container (100Mb) from OpenBSD, connected via serial PPP
    - My e-mail from the last ten years (<25GB) as Maildir, which I need to keep in readable format
    - Some archives (tar.*) which I need to back up only once and keep in readable format

    (Deleted my ideas, as I'm here for suggestions.) What I need:

    1. Use an ssh tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions and be able to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Be able to extract data from the backup easily (filesystem-like usage would be nice)

    How and with what software would you back up this stuff? Hints to tools that can help solve only part of my problem (like encryption) are also greatly appreciated. Greets

    Read the article

  • Linux that restores itself on each reboot

    - by jettero
    I'm looking for methods and software to help create a variant of lubuntu that will restore itself to an install state and/or update on every boot. I'm thinking of doing things like putting the root filesystem on a squashfs and using unionfs and tmpfs to make root writable, but automagically restorable. I'm thinking of updating the squashfs with rsync. Perhaps there are other ways to approach the problem. Perhaps root needn't be writable at all. All thoughts welcome. The home dir would be writable in the usual way. The goal, if it matters, is a Linux that's simple to maintain from the home office, but that functions correctly for customers. We have some custom software that we wish for customers to be able to run trivially on equipment we provide. Ideally these devices would have a "restore to factory" function that would put it back the way we intended. If this is part of the normal boot cycle, so much the better. Why lubuntu? Personal preference for this application. It has a usable desktop, but doesn't take up much ram.

    Read the article

  • The new direction of the gaming industry

    - by raccoon_tim
    Just recently I read a great blog post by David Darling, the founder of Codemasters: http://www.develop-online.net/blog/347/Jurassic-consoles-could-become-extinct. In the blog post he talks about how traditional retail games are experiencing a downfall thanks to the increasing popularity of digital distribution. I personally think of retail games as relics of the past. It does not really make much sense to keep distributing boxed games when the same game can be elegantly downloaded and updated over the air through a digital distribution channel. The world is not all rainbows, however. One big issue with mixing digital distribution and boxed retail games is that resellers will not condone you selling your game for 10€ digitally while they're selling the same game for 70€. The only way to get around this issue is to move to full digital distribution. This has the added benefit of minimizing piracy, as the game can be tightly bound to the service you downloaded it from. Many players are, however, complaining about not being able to play their games offline. Having games tightly bound to the internet is a problem when games are bought from a retailer, as we tend to expect that once we have the product we can use it anywhere, because we physically own it. The truth is that we don’t actually own the product. Instead, the typical EULA actually states that we only have a license to use the product. We’re not, for instance, allowed to disassemble the product, which an owner would indeed be permitted to do. Digital distribution allows us to provide games as services, instead of selling them as standalone products. This means that for a service to work you have to be connected to the internet, but you still have the same rights to use the product. It’s really straightforward; if you downloaded a client from the internet, you are expected to have an internet connection so you’re able to connect to the server. A game distributed digitally that is built using a client-server architecture has the added benefit of allowing you to play anywhere, as long as you have the client installed and you are able to log in with your user information. Your save games can be backed up and your game can continue anywhere. Another development we’re seeing in the gaming industry is the increasing popularity of free-to-play games. These are games that let you play for free but allow you to boost your gaming experience with real-world money. The nature of these games is that players are constantly rewarded with new content, the game can evolve according to their way of playing, and their wishes can be incorporated into the product. Free-to-play games can quickly gain a large player base, and monetization is done by providing players with valuable things to buy, making their gaming experience more fun. I am personally very excited about free-to-play games, as it’s possible to start building the game together with your players, and there is no need to work on the game for 5 years from start to finish and only then see if it’s actually something the players like. This is a typical problem with big movie-like retail games, and recent news about Radical Entertainment practically closing its doors paints a clear picture of what can happen when the risk does not pay off: http://news.teamxbox.com/xbox/25874/Prototype-Developer-Radical-Entertainment-Closes/.

    Read the article

  • Optimize Apache performance

    - by Phliplip
    I'm looking for ways to optimize our current web server, hosted in-house. I'm trying to supply as much relevant information as possible below; please let me know if you require additional information in order to assist. The server is running one single website, an online pizza-ordering platform built on Zend Framework (ver. 1). Traffic stats from the last month show approx. 6,000 page loads per day, concentrated mainly around dinnertime, with peaks of around 1,500 loads/hour in that period. We recently upgraded from a 2/2 Mbit ADSL line to 100/100 Mbit fiber, and we still have performance issues at dinner time (we had assumed the 2 Mbit line was the issue). The website is pretty snappy in low-load periods.

    Hardware:

        CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.13-MHz K8-class CPU)
        Mem: 328M Active, 4427M Inact, 891M Wired, 244M Cache, 623M Buf, 33M Free
        Swap: 16G Total, 468K Used, 16G Free
        (6GB physical, 16GB swap)

        Filesystem   Type   Size  Used  Avail  Capacity  Mounted on
        /dev/ad7s1a  ufs    4.8G  768M  3.7G   17%       /
        devfs        devfs  1.0K  1.0K  0B     100%      /dev
        /dev/ad7s1g  ufs    176G  5.2G  157G   3%        /home
        /dev/ad7s1e  ufs    4.8G  2.8M  4.5G   0%        /tmp
        /dev/ad7s1f  ufs    19G   3.5G  14G    19%       /usr
        /dev/ad7s1d  ufs    4.8G  550M  3.9G   12%       /var

    Server OS: FreeBSD 8.2-RELEASE

    Software: apache-2.2.17, php5-5.3.8, mysql-server-5.5

    Apache footprint (example, taken from top):

        31140 www  1  45  0  377M  41588K  lockf  2  0:00  0.00%  httpd
        31122 www  1  44  0  375M  35416K  lockf  2  0:00  0.00%  httpd
        31109 www  1  44  0  375M  38188K  lockf  2  0:00  0.00%  httpd
        31113 www  1  44  0  375M  35188K  lockf  2  0:00  0.00%  httpd

    Apache is using the prefork MPM with APC (Alternative PHP Cache). The SSL module is loaded but not utilized (as in, it doesn't really work, and thus isn't used). There is a file containing settings for the MPM modules, but as far as I can see it's not included in the httpd.conf file; the include line is commented out. Thus I would guess that the prefork MPM is working off default values too. Here are some other Apache conf values that I found, which are included in httpd.conf:

        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        UseCanonicalName Off
        HostnameLookups Off

    Read the article

  • Subversion: Secure connection truncated

    - by Nick
    Hi, I'm trying to set up a Subversion server with Apache2/WebDAV access. I've created the repository and configured Apache according to the official book, and I can see the repository in a web browser. The browser shows:

        conf/ db/ hooks/ locks/

    although clicking any of those links gives an empty XML document like:

        <D:error>
          <C:error/>
          <m:human-readable errcode="2">
            Could not open the requested SVN filesystem
          </m:human-readable>
        </D:error>

    I've never used Subversion before, so I assume this is correct? Anyway, when I try to connect via a command-line client, it asks for my password; I give it, then I get the (useless) error message:

        svn: OPTIONS of 'https://svn.mysite.com': Could not read status line: Secure connection truncated (https://svn.mysite.com)

    The command I'm using is:

        svn checkout https://svn.mysite.com/ svn.mysite.com

    Subversion was installed using Ubuntu's package manager. It's version 1.6.6 on Ubuntu 10.04. My VirtualHost configuration:

        <VirtualHost 123.123.12.12:443>
            ServerAdmin [email protected]
            ServerName svn.mysite.com

            <Location />
                DAV svn
                SVNParentPath /var/svn/repos
                SVNListParentPath On
                AuthType Basic
                AuthName "Subversion Repository"
                AuthUserFile /etc/subversion/passwd
                Require valid-user
            </Location>

            # Setup The SSL Certificate Paths
            SSLEngine On
            SSLCertificateFile /etc/ssl/certs/mysite.com.crt
            SSLCertificateKeyFile /etc/ssl/private/dmysite.com.key
        </VirtualHost>

    Read the article

  • Windows 7: moved system partition, need to update boot partition

    - by Actorclavilis
    So, I have a decently standard Windows7/Ubuntu dual-boot setup, and (since Ubuntu is my usual operating system) I found I needed to grow my Ubuntu partition and shrink my W7 partition. Originally, my system (500G) looked like this:

        W7 Boot Partition (1.5G)
        Ubuntu (around 240G)
        W7 (same as Ubuntu) (on an extended partition, all by itself)
        Swap (rest of disk, around 16G)

    Now I'm no stranger to partitioning and filesystem tools, especially GParted, which I used on a Linux boot disk. After my partition editing, the partitions are laid out the same, except the Ubuntu partition is now 407G and the W7 partition is smaller to compensate. I had supposed, based on http://www.gparted.org/faq.php, that I would be able to run the W7 install disk in recovery mode and have it deal with the rearrangement, then possibly reinstall GRUB or something. Well, now the W7 install disk doesn't even see my W7 installation. All my files are there, the NTFS is perfectly clean, no problems there, but the install disk won't notice it. (Of course, the GRUB entry works fine, but the W7 boot partition (which I didn't change) refuses to boot it.) So, basically, any ideas on how to fix this? I don't especially want to rerun the entire install procedure because I'll have a bunch of programs to reinstall (never mind redoing GRUB), but I fear that might be the only option. Thanks.

    Read the article

  • Where is my disk space?

    - by user166241
    I recently had a problem with the .xsession-errors file - it became very big (90GB) and took all the disk space (see: How I can check what takes disk space in /tmp?). I cleaned it with the command "> .xsession-errors", but after an hour it became large again. So I deleted it (rm .xsession-errors) - that helped in the sense that it wasn't recreated, but again after an hour the disk space disappeared. Now there is no .xsession-errors anymore, but I don't know where the space went:

        df
        Filesystem     1K-blocks       Used  Available  Use%  Mounted on
        /dev/sda1      106640456  101223392          4  100%  /
        udev             8166744          8    8166736    1%  /dev
        tmpfs            3270224        972    3269252    1%  /run
        none                5120          0       5120    0%  /run/lock
        none             8175552        152    8175400    1%  /run/shm

        du -sc * .[^.]* | sort -n
        0        initrd.img
        0        initrd.img.old
        0        proc
        0        sys
        0        vmlinuz
        0        vmlinuz.old
        4        cdrom
        4        lib64
        4        media
        4        mnt
        4        selinux
        8        dev
        12       srv
        16       lost+found
        68       tmp
        1124     run
        3396     lib32
        5164     .rpmdb
        5540     root
        8888     sbin
        9120     bin
        17132    etc
        106080   opt
        116956   boot
        861908   lib
        3530584  usr
        3821836  var
        13371260 home
        21859112 total

    So there are around 100GB used, but executing du -sc * .[^.]* | sort -n in the root directory finds only ~21GB - so what takes the other 80GB? How can I check? I suspect that when I deleted the .xsession-errors file the errors were redirected somewhere else - but where?

    Read the article

  • FTP timeout but SSH is working?

    - by nmarti
    I have a problem with my server: when I try to connect via FTP to a domain, the connection is VERY slow, and I get timeouts just listing files in a directory. When I try to connect to the domain folder using the root user account via SSH, it works fine, and I can download the files without problem. What can be wrong? I tried rebooting the server, and also the office router, and nothing... It is a Fedora Core 7 server with ProFTPD. Can it be a filesystem problem? Thanks.

    Connection log:

        Cmd: MLST about.php
        250: Start of list for about.php
             modify=20120910092528;perm=adfrw;size=2197;type=file;UNIX.group=505;UNIX.mode=0644;UNIX.owner=10089; about.php
             End of list
        Cmd: PASV
        227: Entering Passive Mode (***hidden***).
        Data connection timed out. Falling back to PORT instead of PASV mode.
        Connection falling back to port (PORT) mode.
        Cmd: PORT ***hidden***
        200: PORT command successful
        Cmd: RETR about.php
        Could not accept a data connection: Operation timed out.

    Read the article

  • How to chain GRUB2 for Ubuntu 10.04 from Truecrypt & its bootloader (multi boot alongside Windows XP partition)?

    - by Rob
    I want TrueCrypt to ask for the password for Windows XP as usual, but with the standard [ESC] option; on selecting that, i.e. via the Escape key, I want it to find the GRUB for the (unencrypted) Ubuntu install. I've installed Windows XP on the 120Gb hard drive of a Toshiba NB100 netbook, then partitioned to make room for Ubuntu 10.04 and installed that after the Windows XP install. When I encrypt Windows XP, TrueCrypt will overwrite the GRUB entry in the master boot record (MBR), I believe (?), and I won't be able to choose between XP and Ubuntu anymore, so I need to restore it back. I've searched fairly extensively for answers on Ubuntu forums and elsewhere, but have not yet found a complete answer that covers all eventualities, scenarios and error messages, or otherwise they talk of legacy GRUB and not GRUB2 (Ubuntu 10.04 uses GRUB2). My setup (partitions):

        Windows XP, NTFS (to be encrypted with TrueCrypt), 40Gb
        /boot (Ext4, 1Gb)
        Ubuntu swap, 4Gb
        Ubuntu / (root) - main filesystem, 20Gb
        NTFS share, 55Gb

    I know that the TrueCrypt boot loader replaces GRUB at boot-up, because I've already tried it on another laptop. I want the boot loader screen to look something like the usual:

        TrueCrypt
        Enter password: (or [ESC] to skip)

    where the password is for Windows XP, and on pressing [ESC] it finds the Ubuntu GRUB to boot from. Thanks in advance for your help. The key area of the problem is how to instruct TrueCrypt what to do when the Escape key is pressed, and how the GRUB/Ubuntu install can be made visible to the TrueCrypt bootloader so it can find it when the Esc key is pressed. This is also known as chaining.

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS pool (150TB+), and I'd like to hear people's experiences of data loss scenarios due to failed hardware, in particular distinguishing between instances where just some data is lost vs. the whole filesystem (or if there even is such a distinction in ZFS). For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read, the pool should go into a faulted mode, but if the vdev is returned the pool should recover? Or not? Or if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: We're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so), and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.

    Read the article

  • Building a Debian base image

    - by Michael
    Is there a preferred way to create base images for Debian-based customized installations? We are currently going with multistrap, but although it's better than hand-crafted chroot stuff, it still has a lot of rough edges. Is there a more reliable and less error-prone way to produce a root filesystem of a Debian installation with some additional .debs installed? (I don't want to send out a Debian installer with a preseed file, though.) Addendum 1: To clarify things a bit: we are delivering some kind of software appliance to our customers. That is, a Debian operating system with some additional software packages -- both our own and third-party ones -- and some configuration changes. To ease the installation process, we have an installer that does nothing more than partitioning, copying files to the partitions and setting up GRUB. So it's basically an image-based installer: we are running the Debian installation ourselves and just distributing the already-installed operating system. The question is about the installation part. I want to have that as easy and robust as possible, and of course it should be an automated process.

    Read the article

  • Get more space for Redis on a Unix server

    - by DevTraveler
    In our app we have only Windows servers, except for the cases where we use Redis queues; for those, we use a Unix server created on Amazon. As you can see, we do not have a lot of space available, and we want to make sure Redis has enough space to work without getting stuck. I am a little bit new to Unix, and after reading some material about the Unix file system I am still not fully sure how I can give the Redis drive (it is in the home directory) more space. I see that /mnt has a lot of space, but I read it is temporary, for CD-ROM and network drives. Can you help me figure out how to get more space for my Redis? If possible, I prefer not to re-install the Redis server.

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/xvda1  7.9G  6.7G  880M   89%   /
        udev        7.4G  4.0K  7.4G    1%   /dev
        tmpfs       3.0G  152K  3.0G    1%   /run
        none        5.0M     0  5.0M    0%   /run/lock
        none        7.4G     0  7.4G    0%   /run/shm
        /dev/xvdb   414G   30G  364G    8%   /mnt

    Thanks.

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping a file of c. 1GB+, UPDATEs on large MySQL tables, etc., has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which is lower spec; this took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860MB; now it is only writing about 1MB every 15 seconds. Does anyone have any advice on debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but I am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with the machine. I am going to install a fresh Ubuntu on the 4th hard drive to see if that makes any difference.

    Read the article

  • How do you get linux to honor setuid directories?

    - by Takigama
    Some time ago, while in a conversation in IRC, one user in a channel I was in suggested someone setuid a directory so that files in it would inherit the user ID, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory. Just to explain, when I say "Linux doesn't support setuid directories", what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory; however, Linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with SELinux - and playing around with rules, it's possible to force a UID on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative - most places claim "no, setuid on directories does not work with Linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html). I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was XFS mounted with "default,acl". I've tried replicating that, but no luck so far (tried with various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on how you get a system to honor setuid on a directory?

    Read the article

  • How can I undo what I did when I accidentally booted a Linux host inside itself with VMware?

    - by ThomasGHenry
    Hello, I'm dual-booting XP and Kubuntu. I wanted to boot my existing raw SCSI XP partition inside Kubuntu, not a virtual XP instance, but I accidentally booted Kubuntu inside itself. I know this is a big mistake, so I interrupted the VM, which saved the state and closed. I rebooted the host, and now I can't load the Kubuntu partition at boot time: I get a maintenance shell, and the Kubuntu partition is read-only. I am able to boot XP as usual. I removed the HDD and tried to mount it on another computer as an external drive, and neither partition (XP or Kubuntu) will be recognized; it just appears to be one device that still mounts and appears empty. From the maintenance shell I can see all the files are still on the Kubuntu partition. How can I undo what I did when I accidentally booted Kubuntu inside itself? Is it a matter of unlocking some files somewhere? How can I do that on a read-only filesystem? Thanks!

    Read the article

  • SSD won't boot anymore

    - by LordrAider
    Yesterday I put the computer to sleep. Something went wrong, because it didn't go fully to sleep, so I restarted the PC, and now it won't boot Windows 7 anymore. It said: "Please insert valid boot device". I ran the Windows 7 restore disc and tried restoring; first it said the MBR was fixed. That had no effect, and then it said: "Operating system could not be loaded". I ran the Windows 7 restore disc again, and this time it said something about a corrupt partition and that it had fixed it, but I got the same message at restart about the operating system not being found. I ran the Windows 7 restore disc once more and used diskpart to look at the volumes: my SSD shows up with a RAW filesystem... not NTFS. The size of the disk seems correct. In the BIOS it also shows up as a healthy disk. What could have gone wrong, and could I recover the data with TestDisk? I assume something went wrong with the partition :(. It's a Plextor 256M2P SSD, only 3 months old. Thanks in advance.

    Read the article

  • File system damage

    - by jffrs
    I am trying to recover the backup superblock on /dev/sda2, which contains Ubuntu 12.04 LTS on an ext4 partition, using an Ubuntu 10.04 live CD. The output is below:

        root@ubuntu:/home/ubuntu# fsck.ext4 -b 163840 -B 4096 /dev/sda2
        e2fsck 1.41.11 (14-Mar-2010)
        /dev/sda2 was not cleanly unmounted, check forced.
        Resize inode not valid. Recreate? yes
        Pass 1: Checking inodes, blocks, and sizes
        Programming error? block #7963637 claimed for no reason in process_bad_block.
        Programming error? block #11240437 claimed for no reason in process_bad_block.
        Root inode is not a directory. Clear? yes
        Inode 712 is in extent format, but superblock is missing EXTENTS feature
        Fix? yes
        Inode 98519 has compression flag set on filesystem without compression support. Clear? yes
        Inode 98519 has INDEX_FL flag set but is not a directory.
        Clear HTree index?

    What's the correct procedure?

    Read the article

  • Multithreading recommendation based on program description

    - by user260197
    I would like to describe some specifics of my program and get feedback on what the most applicable multithreading model would be. I've spent a lot of time now reading about ThreadPool, Threads, Producer/Consumer, etc. and have yet to come to solid conclusions. I have a list of files (all in the same format) but with different contents. I have to perform work on each file. The work consists of reading the file, some processing that takes about 1-2 minutes of straight number crunching, and then writing large output files at the end. I would like the UI to still be responsive after I initiate the work on the specified files. Some questions:

    1. What model/mechanisms should I use? Producer/Consumer, WorkPool, etc.?
    2. Should I use a BackgroundWorker in the UI for responsiveness, or can I launch the threading from within the Form as long as I leave the UI thread alone to continue responding to user input?
    3. How could I take the results or status of the work on each individual file and report it to the UI in a thread-safe way, to give user feedback as the work progresses (there can be close to 1000 files to process)?

    Update: Great feedback so far, very helpful. I'm adding some more details that were asked for below:

    - Output is to multiple independent files. One set of output files is produced per "work item", and those files are themselves read and processed by another process before the "work item" is complete.
    - The work items/threads do not share any resources.
    - The work items are processed in part using an unmanaged static library that makes use of Boost libraries.
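
    The question is framed around .NET (ThreadPool, BackgroundWorker), but since the number crunching lives in an unmanaged Boost-based library, a minimal C++ sketch of the usual shape may help: a fixed pool of worker threads pulling file names from a shared queue, reporting progress through a mutex-protected callback, while the UI thread stays untouched. Names such as process_file and report_progress are placeholders, not anything from the original post.

        #include <functional>
        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>
        #include <vector>

        // Placeholder for the 1-2 minutes of number crunching per file.
        void process_file(const std::string& path) { /* read, crunch, write output files */ }

        // Runs every file through process_file on a fixed pool of workers and calls
        // report_progress once per finished file. Call this off the UI thread so the
        // UI stays responsive.
        void run_jobs(std::vector<std::string> files,
                      std::function<void(const std::string&)> report_progress) {
            unsigned workers = std::thread::hardware_concurrency();
            if (workers == 0) workers = 2;  // hardware_concurrency may report 0

            std::queue<std::string> pending;
            for (auto& f : files) pending.push(std::move(f));

            std::mutex queue_mutex;     // guards 'pending'
            std::mutex progress_mutex;  // serializes progress callbacks

            auto worker = [&] {
                for (;;) {
                    std::string path;
                    {
                        std::lock_guard<std::mutex> lock(queue_mutex);
                        if (pending.empty()) return;        // nothing left to do
                        path = std::move(pending.front());
                        pending.pop();
                    }
                    process_file(path);                     // heavy work, no locks held
                    {
                        std::lock_guard<std::mutex> lock(progress_mutex);
                        report_progress(path);              // e.g. marshal a status update to the UI
                    }
                }
            };

            std::vector<std::thread> pool;
            for (unsigned i = 0; i < workers; ++i) pool.emplace_back(worker);
            for (auto& t : pool) t.join();
        }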

    Read the article

  • Using a Mac for cross platform development?

    - by mdec
    Who uses Macs for cross-platform development? By cross-platform I essentially mean you can compile to target Windows or Unix (not necessarily both at the same time). I understand that this also has a lot to do with writing portable code, but I am more interested in people's experience with using Mac OS X to develop software. I understand that there is a range of IDEs to choose from; I would probably use Eclipse (I like the GCC toolchain), however Xcode seems to be quite popular. Could it be used as described above? At a pinch I could always virtualise with VirtualBox, VMware Player or Parallels to use Visual Studio (or dual-boot for that matter). Having said that, I am open to any other suggested compilers (preferably with an IDE that uses GCC). Also, with the range of Macs available, which one would you recommend? I would prefer a laptop (as I already have a desktop) but am unsure of reasonable specifications. If you are currently using a Mac to do development, I would love to hear what you develop on your Mac and what you like and don't like about it. I would primarily be developing in C/C++/Java. I am also looking to experiment with Boost and Qt, so I'm interested in hearing about any (potential) compatibility issues. If you have any other tips I'd love to hear what you have to say.

    Read the article

  • How do you keep all your languages straight?

    - by Chris Blackwell
    I think I'm going a little crazy. Right now, I'm working with the following languages (I was just doing a mental inventory):

    - C++ - our game engine
    - Assembler - low-level debugging and a few co-processor-specific routines
    - Lua - our game engine scripting language
    - HLSL - for shaders
    - Python - our build system and utility tools
    - Objective C/C++ - game engine platform code for Mac and iPhone
    - C# - a few tools developed in our overseas office
    - ExtendScript - Photoshop exporting tools
    - ActionScript - UI scripting
    - VBScript - some spreadsheet-related stuff
    - PHP - some web-related stuff
    - SQL - some web- and tool-related stuff

    On top of this is the plethora of APIs that often have many different ways of doing the same thing: the std library, Boost, .NET, wxWidgets, Cocoa, Carbon, native script libraries for Python, Lua, etc., OpenGL, Direct3D, GDI, Aqua, augh! I find myself inadvertently conflating languages and APIs, not realizing what I'm doing until I get syntax errors. I feel like I can't possibly keep up with it, and I can't possibly be proficient in all of these areas. Especially outside the realm of C++ and Python, I find myself programming more by looking at manuals than from memory. Do you have a similar problem? Ideas for compartmentalizing so you're more efficient? Deciding where you want to stay proficient? Organizational tips? Good ways to remember that when you switch from Lua to C++ you need to start using semicolons again? Rants on how complicated we programmers have made things for ourselves? Any ideas welcome!

    Read the article

  • C++ smart pointers: sharing pointers vs. sharing data

    - by Eli Bendersky
    In this insightful article, one of the Qt programmers tries to explain the different kinds of smart pointers Qt implements. In the beginning, he makes a distinction between sharing data and sharing the pointers themselves:

        First, let’s get one thing straight: there’s a difference between sharing pointers and sharing data. When you share pointers, the value of the pointer and its lifetime is protected by the smart pointer class. In other words, the pointer is the invariant. However, the object that the pointer is pointing to is completely outside its control. We don’t know if the object is copiable or not, if it’s assignable or not. Now, sharing of data involves the smart pointer class knowing something about the data being shared. In fact, the whole point is that the data is being shared and we don’t care how. The fact that pointers are being used to share the data is irrelevant at this point. For example, you don’t really care how Qt tool classes are implicitly shared, do you? What matters to you is that they are shared (thus reducing memory consumption) and that they work as if they weren’t.

    Frankly, I just don't understand this explanation. There was a clarification plea in the article comments, but I didn't find the author's explanation sufficient. If you do understand this, please explain: what is this distinction, and how do other shared pointer classes (e.g. from Boost or the new C++ standard) fit into this taxonomy? Thanks in advance
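
    Not part of the original question - just a minimal sketch of how the distinction is often illustrated in C++. std::shared_ptr shares the pointer: it manages lifetime but knows nothing about copying the pointee, so writes through one handle are visible through all of them. A small copy-on-write wrapper, by contrast, shares the data, the way Qt's implicitly shared tool classes do: copies are cheap but still behave like independent values. The CowString class here is hypothetical and not thread-safe; it only exists to show the idea.

        #include <iostream>
        #include <memory>
        #include <string>

        // Sharing the pointer: lifetime is managed, but the smart pointer
        // knows nothing about the pointee - both handles see every change.
        void share_pointer() {
            auto a = std::make_shared<std::string>("hello");
            auto b = a;               // same string object
            b->append(" world");      // visible through a as well
            std::cout << *a << "\n";  // prints "hello world"
        }

        // Sharing the data: a copy-on-write wrapper that knows how to copy
        // its payload, so handles act like independent values.
        class CowString {
        public:
            explicit CowString(std::string s)
                : data_(std::make_shared<std::string>(std::move(s))) {}

            void append(const std::string& tail) {
                if (data_.use_count() > 1)  // detach before the first write
                    data_ = std::make_shared<std::string>(*data_);
                data_->append(tail);
            }
            const std::string& str() const { return *data_; }

        private:
            std::shared_ptr<std::string> data_;
        };

        void share_data() {
            CowString a("hello");
            CowString b = a;          // storage shared for now
            b.append(" world");       // b detaches; a is untouched
            std::cout << a.str() << " / " << b.str() << "\n";  // "hello / hello world"
        }

        int main() {
            share_pointer();
            share_data();
        }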

    Read the article

  • Lucene - querying with long strings

    - by Mikos
    I have an index with a field "Affiliation"; some example values are:

        "Stanford University School of Medicine, Palo Alto, CA USA",
        "Institute of Neurobiology, School of Medicine, Stanford University, Palo Alto, CA",
        "School of Medicine, Harvard University, Boston MA",
        "Brigham & Women's, Harvard University School of Medicine, Boston, MA"
        "Harvard University, Cambridge MA"

    and so on (the bottom line being that the affiliations are written in multiple ways, with no apparent consistency). When I query the index on the affiliation field using, say, "School of Medicine, Stanford University, Palo Alto, CA" (with QueryParser) to find all Stanford-related documents, I get a lot of false positives, presumably because of the presence of "School of Medicine" etc. (note: I cannot use a phrase query because of the variability in the way the affiliation is constructed). I have tried the following:

    1. Using a SpanNearQuery by splitting the search phrase on whitespace (here I get no results!)
    2. Boosting (using ^) by splitting on the commas and boosting the last parts, such as "Palo Alto CA", with a much higher boost than the initial phrases. Here I still get lots of false positives.

    Any suggestions on how to approach this? If SpanNearQuery is the way to go, any ideas on why I get 0 results?

    Read the article

  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me. For passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly:

        Mutex m;
        tData * stage;   // temporary, accessed concurrently

        // send data, gives up ownership, receives old stage if any
        tData * Send(tData * newData) {
            ScopedLock lock(m);
            swap(newData, stage);
            return newData;
        }

        // receiving thread fetches latest data here
        tData * Fetch(tData * prev) {
            ScopedLock lock(m);
            if (stage != 0) {
                // ... release prev
                prev = stage;
                stage = 0;
            }
            return prev;  // now current
        }

    Note: this is not supposed to be a full producer-consumer queue; only the most recent data is relevant. Also, I've skimped on resource management somewhat here. When necessary I'm using two such stages: one to send config changes to the worker, and one for sending back results. Now, my questions, assuming that ScopedLock implements a full memory barrier:

    - Do stage and/or workerData need to be volatile?
    - Is volatile necessary for tData members?
    - Can I use smart pointers instead of the raw pointers - say boost::shared_ptr?
    - Anything else that can go wrong?

    I am basically trying to avoid "volatile infection" spreading into tData, and to minimize lock contention (a lock-free implementation seems possible, too). However, I'm not sure if this is the easiest solution. ScopedLock acts as a full memory barrier. Since all this is more or less platform dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (A preliminary "thanks, but" for recommending libraries such as Intel TBB - I am trying to understand the platform issues here.)
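
    Not from the original post - just a sketch of the shared_ptr variant the third question asks about, written with the C++11 equivalents (std::shared_ptr, std::mutex, std::lock_guard) in place of the hand-rolled Mutex/ScopedLock. The lock already provides the memory barrier, so data handed over this way does not need volatile; volatile in C++ is not a thread-synchronization tool.

        #include <memory>
        #include <mutex>
        #include <utility>

        struct tData { /* configuration or results payload */ };

        class LatestValue {
        public:
            // Worker publishes the most recent data; anything still staged
            // is dropped automatically when the shared_ptr is overwritten.
            void Send(std::shared_ptr<tData> newData) {
                std::lock_guard<std::mutex> lock(m_);
                stage_ = std::move(newData);
            }

            // Controller fetches the latest data, or keeps its previous
            // value if nothing new has been staged since the last call.
            std::shared_ptr<tData> Fetch(std::shared_ptr<tData> prev) {
                std::lock_guard<std::mutex> lock(m_);
                if (stage_)
                    prev = std::move(stage_);  // stage_ becomes empty
                return prev;
            }

        private:
            std::mutex m_;
            std::shared_ptr<tData> stage_;  // most recent, not yet fetched
        };

    With shared_ptr the explicit "release prev" step disappears, since the old object is freed when its last reference goes away.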

    Read the article
