Search Results

Search found 1600 results on 64 pages for 'exam revision'.

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (kernel 3.0.0-12-server x86_64):

        => ctrl all show config

        Smart Array P410i in Slot 0 (Embedded)    (sn: removed)
           array A (SAS, Unused Space: 1335535 MB)
              logicaldrive 1 (279.4 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)

    Initially there were two 300 GB disks, which were replaced by 1 TB disks, and I now have to extend the logical drive to use that extra space. However, when trying to do so I get the following warning:

        => ctrl slot=0 ld 1 modify size=max

        Warning: Extension may not be supported on certain operating systems.
                 Performing extension on these operating systems can cause data
                 to become inaccessible. See ACU documentation for details.
        Continue? (y/n)

    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a GParted live disk. Here's the config of the RAID controller in use:

        => ctrl all show detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: removed
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Hardware Revision: Rev C
           Firmware Version: 5.12
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: False
           Drive Write Cache: Disabled
           SATA NCQ Supported: True

    And the partition table:

        Number  Start   End    Size    Type      File system     Flags
         1      1049kB  274GB  274GB   primary   ext4            boot
         2      274GB   300GB  25.8GB  extended
         5      274GB   300GB  25.8GB  logical   linux-swap(v1)
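
    For what it's worth, a sketch of the usual sequence (the slot and logical drive numbers come from the question; the device node /dev/sda and the resize2fs step assume the root filesystem is the ext4 partition shown above):

        # Grow the logical drive to use the array's unused space
        # (this is the step that prints the warning quoted above)
        hpacucli ctrl slot=0 ld 1 modify size=max

        # Later, offline: boot the GParted live disk, move/remove the swap
        # partitions at the end of the disk and grow partition 1, then:
        e2fsck -f /dev/sda1      # assumes the logical drive appears as /dev/sda
        resize2fs /dev/sda1      # grow ext4 to fill the enlarged partition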

  • X.org - mouse gets stuck on press

    - by grawity
    I'm using Arch Linux. Very often, when I click on something, the OS sees the mouse press but not the release. If it was a link or file I clicked, moving the cursor drags it along too. Hammering the same mouse button again has no effect. Usually, if I tap the touchpad (ALPS), the system finally sees both the press and the release of that tap, and I can continue working. (This might be because it uses a different driver: synaptics instead of evdev.) As you can imagine, this is quite annoying, even for someone who spends 70% of his life in front of a terminal app. This is not a mouse issue: I'm on a laptop, and it affects both the TrackPoint and an external USB mouse. This is not a DE or window manager issue either: I have used GNOME (with Metacity, Compiz and Xfwm4), Xfce (with Metacity and Xfwm4), mwm, twm, awesome, and wmii. It doesn't seem to be a hardware thing: after rebooting into Windows XP, everything works fine. hal is used for the auto-configuration of devices (as I have to disconnect the USB mouse often), so xorg.conf really has nothing of relevance. Xorg -version shows:

        X.Org X Server 1.6.3.901 (1.6.4 RC 1)
        Release Date: 2009-8-25
        X Protocol Version 11, Revision 0

    If it changes anything, the laptop is a stone-age Dell Latitude C840. I kinda suspect either hal or evdev of causing it, but I really have no idea what to check further. In other words, HALP!#$ This thing is driving me nuts.
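
    One way to narrow it down (a diagnostic sketch, assuming the standard xev utility is installed): watch the raw events the server delivers and check whether a ButtonRelease ever arrives after a stuck press.

        # Click inside the xev window; each click should log a ButtonPress
        # followed by a ButtonRelease for the same button
        xev | grep --line-buffered -E 'ButtonPress|ButtonRelease'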

  • What does this mean: "SATP VMW_SATP_LOCAL does not support device configuration"?

    - by Jason Tan
    Can anyone tell me what this means in ESXi 5.1?

        SATP VMW_SATP_LOCAL does not support device configuration

    I've googled it and I get a lot of results, but so far all the pages that contain the string are discussing other matters. The storage array is an HDS HUS-VM and the hosts are HP BL460c G8 blades with Flex Fabric and Flex Fabric VCs, which I am in the process of commissioning. I would like to get it started on the right foot, i.e. error- and warning-free!

        naa.600508b1001c56ee3d70da65f071da23
           Device Display Name: HP Serial Attached SCSI Disk (naa.600508b1001c56ee3d70da65f071da23)
           Storage Array Type: VMW_SATP_LOCAL
           Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
           Path Selection Policy: VMW_PSP_FIXED
           Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L1;current=vmhba0:C0:T0:L1}
           Path Selection Policy Device Custom Config:
           Working Paths: vmhba0:C0:T0:L1
           Is Local SAS Device: true
           Is Boot USB Device: false

    This is the same LUN:

        ~ # esxcli storage core device list -d naa.60060e80132757005020275700000016
        naa.60060e80132757005020275700000016
           Display Name: HITACHI Fibre Channel Disk (naa.60060e80132757005020275700000016)
           Has Settable Display Name: true
           Size: 204800
           Device Type: Direct-Access
           Multipath Plugin: NMP
           Devfs Path: /vmfs/devices/disks/naa.60060e80132757005020275700000016
           Vendor: HITACHI
           Model: OPEN-V
           Revision: 5001
           SCSI Level: 2
           Is Pseudo: false
           Status: degraded
           Is RDM Capable: true
           Is Local: false
           Is Removable: false
           Is SSD: false
           Is Offline: false
           Is Perennially Reserved: false
           Queue Full Sample Size: 0
           Queue Full Threshold: 0
           Thin Provisioning Status: unknown
           Attached Filters: VAAI_FILTER
           VAAI Status: supported
           Other UIDs: vml.020001000060060e801327570050202757000000164f50454e2d56
           Is Local SAS Device: false
           Is Boot USB Device: false
        ~ #
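
    For context, the message appears to be informational: VMW_SATP_LOCAL simply has no per-device options, so there is nothing for it to display. A couple of read-only commands that may help frame the question (a sketch; standard esxcli namespaces on 5.x, with the device ID taken from the output above):

        # Show how NMP claimed the device (SATP, PSP, working paths)
        esxcli storage nmp device list -d naa.600508b1001c56ee3d70da65f071da23

        # List the SATP claim rules to see which rule matched it as local
        esxcli storage nmp satp rule list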

  • svn post-commit not performing

    - by davin
    I've been sitting on this for about 7 hours, and I've aged close to 7 years... ahhh, server admin does that to me. I have svn wired through apache2 with WebDAV in the usual manner (basically like http://www.howtoforge.com/setting-up-subversion-with-webdav-post-commit-hook-and-multiple-sites-on-jaunty-jackalope-ubuntu-9.04). I've had endless problems with this (I didn't on my previous Ubuntu server install, although this is Ubuntu 10.10): this happened, and was fixed like in the post http://stackoverflow.com/questions/2547400/how-do-you-fix-an-svn-409-conflict-error. This looks like my issue, although its solution didn't work for me: http://serverfault.com/questions/135494/apache-svn-on-ubuntu-post-commit-hook-fails-silently-pre-commit-hook-permis. My commit to svn works (finally), although the post-commit hook, which is supposed to svn update the working copy of the repo on the server, doesn't work. The post-commit hook itself executes and has sudo permissions (as in the setup URL above; testing with whoami > somelogfile.log or sudo whoami > somelogfile.log shows www-data and root, respectively), although it won't perform the svn update (sudo svn update /var/www/gameServer > /var/svn/gameServer.log). Similar to the serverfault URL above, when I perform the exact command myself it does update the working copy to the latest revision, just not through the post-commit hook. An age-old question that is 90% of the time a permissions issue, but in pure frustration I chmod 777'd lots of stuff, not to mention the fact that www-data is in /etc/sudoers, so it shouldn't even need that. I'm collapsing in front of the screen, partly out of frustration and partly out of sleepiness. Any direction would be appreciated.
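
    A minimal post-commit hook of the kind described, with explicit PATH and logging so failures stop being silent (paths follow the question; the PATH line and the redirection are assumptions worth trying, since Subversion runs hooks with an empty environment):

        #!/bin/sh
        # Hooks inherit no environment; set PATH explicitly
        PATH=/usr/bin:/bin
        REPOS="$1"
        REV="$2"
        # Capture stdout and stderr so the actual error becomes visible
        sudo /usr/bin/svn update /var/www/gameServer >> /var/svn/gameServer.log 2>&1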

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which is cached via Memcached):

        100 concurrent calls to API call (http):  115ms (median)  260ms (max)
        100 concurrent calls to API call (https): 6.1s (median)   11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers, and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168   64.1    162    268
        Processing:   385 6096 3438.6   6199  11928
        Waiting:      379 6091 3438.5   6194  11923
        Total:        449 6264 3476.4   6323  12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
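
    One detail in the output worth noting: despite ab being run with -k, it reports "Keep-Alive requests: 0", so every one of the 100 requests pays a full TLS handshake, and with AES256-SHA over a 2048-bit key that can dominate the numbers under concurrency. A sketch of the nginx settings usually involved (directive names are standard nginx; the values are illustrative, not tuned recommendations):

        http {
            # Share the TLS session cache across workers so resumed
            # handshakes skip the expensive public-key step
            ssl_session_cache   shared:SSL:10m;
            ssl_session_timeout 10m;

            server {
                listen 443 ssl;
                keepalive_timeout 65;   # let clients reuse connections
                # Put cheaper ciphers first (list is illustrative)
                ssl_ciphers RC4-SHA:AES128-SHA:HIGH:!aNULL:!MD5;
                ssl_prefer_server_ciphers on;
            }
        }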

  • Dell Dimension 8400 Startup error

    - by Michael
    Hello all, thanks first for taking the time to read this and possibly help me... I am a pretty decent computer tech, but not decent enough. I am having an issue with my computer, which is running Windows XP and, as mentioned, is a Dell Dimension 8400. As soon as I power the system up, the fan goes into hyperdrive (spins like crazy and is very loud), then the Dell startup screen comes up and the loading bar gets stuck at "BIOS Revision A00", never loading beyond that. I have read a lot about it and think the main problem is that it cannot locate the BIOS file (which does have an updated version, I think A09). I cannot enter safe mode, the BIOS setup, or anything. I do have the file on my other computer, and I was wondering if there is a way to use a USB flash drive (as I have read on other sites) to create a bootable MS-DOS diskette, but I am failing at creating one... is this possible? Or is there anything else I can do? I tried removing the battery from the system for about 10 minutes while it was completely unplugged, then rebooting and going into the BIOS menu, but the same thing keeps happening... can anyone help me :-(

  • Jenkins shell command isn't executing

    - by Dmitro
    In a Jenkins project I added a build step to execute rm /var/www/ru.liveyurist.ru/tmp/*. But when I build the project I get this error:

        Started by user anonymous
        Building in workspace /var/www/ru.myproject.ru
        Updating https://subversion.assembla.com/svn/myproject/trunk
        At revision 1168
        no change for https://subversion.assembla.com/svn/liveexpert/trunk since the previous build
        [ru.myproject.ru] $ /bin/sh -xe /tmp/hudson7189633355149866134.sh
        FATAL: command execution failed
        java.io.IOException: Cannot run program "/bin/sh" (in directory "/var/www/ru.myproject.ru"): java.io.IOException: error=12, Cannot allocate memory
                at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
                at hudson.Proc$LocalProc.<init>(Proc.java:244)
                at hudson.Proc$LocalProc.<init>(Proc.java:216)
                at hudson.Launcher$LocalLauncher.launch(Launcher.java:709)
                at hudson.Launcher$ProcStarter.start(Launcher.java:338)
                at hudson.Launcher$ProcStarter.join(Launcher.java:345)
                at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
                at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
                at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
                at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:703)
                at hudson.model.Build$RunnerImpl.build(Build.java:178)
                at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
                at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:473)
                at hudson.model.Run.run(Run.java:1410)
                at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
                at hudson.model.ResourceController.execute(ResourceController.java:88)
                at hudson.model.Executor.run(Executor.java:238)
        Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
                at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
                at java.lang.ProcessImpl.start(ProcessImpl.java:81)
                at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
                ... 16 more
        Build step 'Execute shell' marked build as failure
        Finished: FAILURE

    I started Jenkins as the root user. Can anyone advise what the reason for this error might be?
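
    error=12 from ProcessBuilder usually means the JVM tried to fork and the kernel refused: forking momentarily needs enough memory plus swap to cover a duplicate of the JVM's address space. A common remedy on small servers is adding swap; a sketch, assuming root and the hypothetical path /swapfile (the 1 GB size is illustrative):

        # Check how tight memory currently is
        free -m

        # Create and enable a 1 GB swap file
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile

        # Persist it across reboots
        echo '/swapfile none swap sw 0 0' >> /etc/fstab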

  • Sandra reports my CPU as "Engineering Sample", how can I be sure this is correct?

    - by stevenvh
    I ran SiSoftware's Sandra on my new PC, and for my CPU it reports:

        Generation        : G8 / T29
        Name              : TN0 (Trinity) FX/Opteron 32nm (ES)
        Revision/Stepping : 0 : 10 / 1
        Stepping Mask     : TN-A1
        Microcode         : MU6F10010F

    The (ES) is a well-known code in product development, meaning "Engineering Sample". Those are beta versions of the CPU, which may still contain bugs or even have features switched off. I contacted both the PC's manufacturer, Medion, and AMD about this. I have to downvote the Medion helpdesk here: the person I talked to boldly said Sandra was wrong (without knowing how Sandra got this information; he didn't even know the software), used the word "impossible", and concluded "We're not taking this in consideration for service". Right. So, if you like Medion for their good prices but like good support even better, you may consider buying your PC elsewhere. AMD was more helpful, but wanted to be sure before replacing the part (which I find reasonable). They suggested that I dismount the cooler from the CPU to check what is printed on it. I'm a bit reluctant here: I would have to wipe the thermal paste from the CPU, and won't know for sure my cooling will still be OK afterwards.

    Questions:
    Has anybody actually found a confirmed ES CPU in her PC?
    Is anybody aware of Sandra erroneously reporting CPUs as Engineering Samples?
    How can you tell an ES, apart from the print on the package?
    Shouldn't the Stepping Mask identify the CPU uniquely?

  • What does this diagnostic output mean?

    - by ChrisF
    I recently had a fault with my broadband connection. It turned out to be a fault with the ISP's or telco's equipment. My ISP posted this diagnostic, and while I understand it in general, I'd like to know more about the details. I'm assuming that ATM means Asynchronous Transfer Mode and PPP means Point-to-Point Protocol. It was this that my router was indicating as the fault.

        xDSL Status Test Summary
        Sync Status: Circuit In Sync

        General Information
        NTE Status:
        NTE Power Status: Unknown
        Bypass Status:

                           Upstream DSL Link   Downstream DSL Link
        Loop Loss:         9.0                 17.0
        SNR Margin:        25                  15
        Errored Seconds:   0                   0
        HEC Errors:        0
        Cell Count:        0                   0
        Speed:             448                 8128

        TAM Status: Successfully executed operation

        Network Test: Sub-Test Results
        Layer   Name                             Value         Status
        Modem                                                  pass
                Transmitter Power (Upstream)     12.4 dBm
                Transmitter Power (Downstream)   8.8 dBm
                Upstream psd                     -38 dBm/Hz
                Downstream psd                   -51 dBm/Hz
        DSL                                                    pass
                Equipment Vendor Name            TSTC
                Equipment Vendor Id              n/a
                Equipment Vendor Revision        n/a
                Training Time                    8 s
                Num Syncs                        1
                Upstream bit rate                448 kbps
                Downstream bit rate              8128 kbps
                Upstream maximum bit rate        1108 kbps
                Downstream maximum bit rate      11744 kbps
                Upstream Attenuation             3.5 dB
                Downstream Attenuation           0.0 dB
                Upstream Noise Margin            20.0 dB
                Downstream Noise Margin          19.0 dB
                Local CRC Errors                 0
                Remote CRC Errors                0
                Up Data Path                     interleaved
                Down Data Path                   interleaved
                Standard Used                    G_DMT
        INP
                INP Upstream Symbols             n/a
                INP Upstream Delay               4 ms
                INP Upstream Depth               4
                INP Downstream Symbols           n/a
                INP Downstream Delay             5 ms
                INP Downstream Depth             32
        ATM     Reason: No ATM cells received                  fail
                Number of cells transmitted      30
                Number of cells received         0
                number of Near end HEC errors    0
                number of Far end HEC errors     n/a
        PPP     Reason: No response from peer                  fail
                PAP authentication               nottested
                CHAP authentication              nottested

    (I'm not sure that Super User is the best place to ask this, but two people have suggested I ask it here, so here I am.)

  • error C2512 in precompiled header file?

    - by SoloMael
    I'm having a ridiculously strange problem. When I try to build the program below, there's an error message that says: "error C2512: 'Record' : no appropriate default constructor available". And when I double-click it, it directs me to a precompiled, read-only header file named "xmemory0". Do they expect me to change a read-only file? Here's the segment of code in the file it directs me to:

        void construct(_Ty *_Ptr)
            {   // default construct object at _Ptr
            ::new ((void *)_Ptr) _Ty();   // directs me to this line
            }

    Here's the program:

        #include <iostream>
        #include <vector>
        #include <string>
        using namespace std;

        const int NG = 4; // number of scores

        struct Record
        {
            string name;     // student name
            int scores[NG];
            double average;

            // Calculate the average when the scores are known
            Record(int s[], double a)
            {
                double sum = 0;
                for (int count = 0; count != NG; count++)
                {
                    scores[count] = s[count];
                    sum += scores[count];
                }
                average = a;
                average = sum / NG;
            }
        };

        int main()
        {
            // Names of the class
            string names[] = {"Amy Adams", "Bob Barr", "Carla Carr",
                              "Dan Dobbs", "Elena Evans"};

            // Exam scores for each student
            int exams[][NG] = { {98, 87, 93, 88},
                                {78, 86, 82, 91},
                                {66, 71, 85, 94},
                                {72, 63, 77, 69},
                                {91, 83, 76, 60} };

            vector<Record> records(5);
            return 0;
        }
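
    The trigger is the last line of main: vector<Record> records(5) has to default-construct five Record objects, and defining Record(int[], double) suppresses the compiler-generated default constructor, so the error surfaces inside the library's construct() helper rather than in the user code. A sketch of two common ways out (illustrative, not the only fixes):

        // Option 1: restore a default constructor so vector<Record>(5) works
        struct Record
        {
            string name;
            int scores[NG];
            double average;

            Record() : scores(), average(0) {}   // value-initialize members
            Record(int s[], double a);           // as in the question
        };

        // Option 2: start empty and add fully constructed records instead
        vector<Record> records;
        records.reserve(5);
        // records.push_back(Record(exams[0], 0.0));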

  • Creating an EC2 image on Amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my EC2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

        ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    Which returned:

        Copying / into the image file /mnt/image...
        Excluding:
             /sys/kernel/debug
             /sys/kernel/security
             /sys
             /proc
             /dev/pts
             /dev
             /dev
             /media
             /mnt
             /proc
             /sys
             /etc/udev/rules.d/70-persistent-net.rules
             /etc/udev/rules.d/z25_persistent-net.rules
             /mnt/image
             /mnt/img-mnt
        1+0 records in
        1+0 records out
        1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
        mkfs.ext3: option requires an argument -- 'L'
        Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
               [-i bytes-per-inode] [-I inode-size] [-J journal-options]
               [-G meta group size] [-N number-of-inodes]
               [-m reserved-blocks-percentage] [-o creator-os]
               [-g blocks-per-group] [-L volume-label]
               [-M last-mounted-directory] [-O feature[,...]]
               [-r fs-revision] [-E extended-option[,...]]
               [-T fs-type] [-U UUID] [-jnqvFKSV] device [blocks-count]
        ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants an argument for -L, which is a volume label. But ec2-bundle-vol doesn't seem to take a volume label as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

        # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?

  • How would you like computer science classes to be taught?

    - by aaa
    Hello, I am a graduate student now, and hopefully someday I will teach. My interests are C++, Python, embedded languages, and scientific computing. Meanwhile, I daydream about how I would teach. I was not quite happy with my undergraduate university, as I found many computer science classes lacking. So I would like to ask you: if you were a student, how would you like your computer science classes to be taught? I understand it is a very subjective question, but nevertheless I think it's important to know what people want. Some specific points I am interested in:

    Should computer languages be taught explicitly, or should students be required to pick up a language on their own?
    What is better for learning: tests, projects, some sort of take-home exam?
    How do you think class time should be used? Theory, introductions, explanations, etc.?
    Do you think group projects are important?
    How much about computer architecture do you want to learn in a computer science class (not necessarily an assembler class)?
    Should a particular operating system/editor be mandated or encouraged?

    Thank you for your comments. The question has been closed because it is a discussion question rather than Q&A. If you know an appropriate website for discussions of this sort with a low noise ratio, please let me know.

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). is fs dead?

    - by ip
    My company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it using tools from a PLD live CD. These are the tools I have tested, without any success: e2retrieve, testdisk, photorec, dd_rescue/dd_rhelp, ddrescue, fsck.ext2, e2salvage.

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan  1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

        e2fsck 1.41.3 (12-Oct-2008)
        fsck.ext3: Group descriptors look bad... trying backup blocks...
        fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3

    I also tried backup superblocks, with the same error as a result. Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip
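
    Two low-risk checks in case the earlier backup-superblock attempts used the wrong location or block size (a sketch; the 4096-byte block size comes from the dumpe2fs output above, and mke2fs -n only prints, it does not write anything):

        # List where the backup superblocks should be for this geometry
        mke2fs -n /dev/sda3

        # Retry fsck against a backup superblock, forcing the 4K block size
        e2fsck -b 32768 -B 4096 /dev/sda3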

  • How to make Synaptics touchpad work better on Linux?

    - by whitequark
    I have Debian Squeeze currently installed on a Samsung N250 netbook with a Synaptics touchpad. These touchpads are generally good, and everything works perfectly on Windows. The support is extremely sucky on Linux, though. Of course it has all the cool features like two-finger scrolling, but the cursor (or whatever replaces the cursor when scrolling) trembles awfully. It trembles when I just keep a finger on the touchpad, it shakes awfully if the finger is close to the top of the touchpad, and when I'm scrolling with it (no matter whether with two fingers or one), the page shakes a lot too. None of this behavior is observed even in Windows XP with just the default drivers installed. Here's the Xorg version:

        $ Xorg -version
        X.Org X Server 1.7.7
        Release Date: 2010-05-04
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.32-5-686 i686 Debian
        Current Operating System: Linux mannaz 2.6.32-5-686 #1 SMP Fri Dec 10 16:12:40 UTC 2010 i686
        Kernel command line: BOOT_IMAGE=/vmlinuz-2.6.32-5-686 root=/dev/mapper/mannaz-root ro quiet splash
        Build Date: 02 December 2010 01:08:37AM
        xorg-server 2:1.7.7-10 (Julien Cristau <[email protected]>)
        Current version of pixman: 0.16.4

    and here is the synclient -l output: http://pastebin.com/Eqa6hGXP
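
    If the trembling is sensor noise, the synaptics driver's hysteresis parameters can damp small movements (a sketch; these parameters exist in newer xf86-input-synaptics releases, so whether the Squeeze version exposes them needs checking first):

        # See whether the driver exposes hysteresis settings at all
        synclient -l | grep -i hysteresis

        # If present: ignore movements smaller than N units (values illustrative)
        synclient HorizHysteresis=30 VertHysteresis=30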

  • Most cost efficient way to backup Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repo for my Subversion database. When I dump my SVN database, it's about 10 gigabytes. I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new changes to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I will have to upload ten gigs every time I instantiate a backup after doing a simple commit to Subversion. Here are the options as I see them:

    Option 1: I am looking at duplicity, which has --volsize, which splits data into volumes of a given number of megs. Is it possible to split the Subversion dumps using this, so further incremental backups are measured in megabytes?

    Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of writing a commit. However, I have the option of taking the repo offline between the hours of midnight and 4am. Each revision in my Berkeley DB uses a file as its record.
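
    A third option worth weighing: rather than splitting one monolithic dump, let Subversion produce the increments itself; svnadmin dump accepts a revision range and an --incremental flag, so each backup contains only new revisions. A sketch, with hypothetical paths and a state file tracking the last dumped revision:

        #!/bin/sh
        REPO=/var/svn/repo
        STATE=/var/backups/last-dumped-rev
        HEAD=$(svnlook youngest "$REPO")
        LAST=$(cat "$STATE" 2>/dev/null || echo 0)
        [ "$LAST" -ge "$HEAD" ] && exit 0   # nothing new since last run

        # Dump only the revisions added since the last backup, then upload
        svnadmin dump "$REPO" -r $((LAST + 1)):"$HEAD" --incremental \
            | gzip > "/var/backups/svn-$((LAST + 1))-$HEAD.dump.gz"
        echo "$HEAD" > "$STATE"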

  • Keep permalinks in WordPress

    - by Remus Rigo
    Is there any way to set my permalinks to keep their exact link? If I have a post like this one, http://blog.rigo.ro/?p=11, then I would like every edit of the post to keep this link. I have installed the Revision Control plugin and set it to keep no revisions. Any idea how to do this? I want to keep this format of links.

    Edit: I took another look. The permalinks do keep their links, but every time I edit, a new version is added to the database, and the next post gets a higher number. If I edit my current post three times (blog.rigo.ro/?p=11), the next post will be blog.rigo.ro/?p=14. So my question is how I can keep all my posts and edits clean, one post/more edits = one entry in the database, so that if I have 10 posts on my site and I edit them, my permalinks run from 1 to 10. PS: I don't want to edit my database manually; is there any plugin to do this?
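
    For reference, revisions can also be disabled without a plugin via a standard wp-config.php constant (a sketch; note this only stops future edits from consuming IDs, it does not renumber existing posts):

        <?php
        // In wp-config.php: store no revisions at all...
        define('WP_POST_REVISIONS', false);

        // ...or keep at most one revision per post
        // define('WP_POST_REVISIONS', 1);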

  • Solution to easily share large files with non-tech-savvy users?

    - by Tim
    Hey all, we've got a server setup at work which we'd like to use to exchange large files with known clients easily. We're looking into software to facilitate this, but somehow typing "large file hosting" into Google gives questionable results. ;) We've come up with the following requirements, and I hope any of you can point us in the direction of a solution that offers this functionality, or is malleable to our needs.

    Synchronization / revision management is of no concern; it's mostly single large (up to 1+ GB) file uploads and downloads we'll need.
    We'd like to make the downloads expire and be removed after a certain number of days / downloads, to limit the amount of cleanup we'd have to do.
    The data files exchanged sometimes hold confidential information, so the URLs generated should be random and not publicly visible.
    Our users are of the less technically savvy variety, so a simple web form would be best over a desktop client (because we also have to support a mix of operating systems).
    As for use of the system, we'd either like to send out generated random URLs for them to upload their files, or have an easy way to manage and expire users.
    Works on a Linux (Ubuntu) server (so nothing .NET-related, please).

    Does anyone know of software that fits the above criteria? We've already seen a few instances of this within the scientific community, but nothing we could use directly. Best regards, Tim

  • SVN checkout returns 400 error

    - by eboix
    I'm trying to download the http://code.opencv.org/svn/opencv/trunk/ repository of all of the OpenCV source code, as specified in an OpenCV installation tutorial. In the tutorial, the repository https://code.ros.org/svn/opencv/trunk/ is used, but it was moved to http://code.opencv.org/svn/opencv/trunk/, and you now need a password to access the code.ros.org repository. Anyway, I'm using TortoiseSVN to download the SVN repository (I get the same error with http://sourceforge.net/projects/win32svn/), and I get this:

        Checkout from http://code.opencv.org/svn/opencv/trunk, revision HEAD, Fully recursive, Externals included
        Server sent unexpected return value (400 Bad request. Method Unknown) in response to REPORT request for '/svn/opencv/!svn/vcc/default'

    On the TortoiseSVN site I found something about this 400 error:

        You're behind a firewall which blocks DAV requests. Most firewalls do that. Either ask your Administrator to change the firewall, or access the repository with https:// instead of http:// like in https://svn.collab.net/repos/svn/. That way you connect to the repository with SSL encryption, which firewalls can't interfere with (if they don't block the SSL port completely). Also some virus scanners (i.e. Kaspersky) are known to interfere and cause this error.

    The code.ros.org repository is https://, so I would be able to access it, but I need a password, which I don't know. I made an account on ros.org, but it seems that I still need a password to access the code repository; my username/password combination does not work. I unblocked all of the TortoiseSVN programs in my firewall settings: nothing changed. I temporarily stopped my firewall to see if it was interfering with my request: same error. How can I do an svn checkout of http://code.opencv.org/svn/opencv/trunk/opencv/ without getting this error? Is there any way to make it https://? Any help would be appreciated!

  • Ubuntu hard disk getting SATA errors

    - by Henadzy
    I am getting "UNC" errors on a hard disk on Ubuntu 9.10. It slows down my system, applications have not been responding for a long time. But when I mount the filesystem on another computer, it works properly. disk: SAMSUNG HD161HJ (SATA) syslog: Apr 25 00:28:25 vare6gin kernel: [ 885.773839] ata3.00: exception Emask 0x1 SAct 0x1e SErr 0x0 action 0x6 frozen Apr 25 00:28:25 vare6gin kernel: [ 885.773845] ata3.00: Ata error. fis:0x21 Apr 25 00:28:25 vare6gin kernel: [ 885.773861] ata3.00: cmd 60/08:08:3f:00:ad/00:00:10:00:00/40 tag 1 ncq 4096 in Apr 25 00:28:25 vare6gin kernel: [ 885.773864] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773871] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773877] ata3.00: error: { UNC } [...snip 3 similar repeats of last 4 lines; see revision history for full log...] Apr 25 00:28:25 vare6gin kernel: [ 885.773970] ata3: hard resetting link Apr 25 00:28:25 vare6gin kernel: [ 885.773974] ata3: nv: skipping hardreset on occupied port Apr 25 00:28:25 vare6gin kernel: [ 886.240073] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 25 00:28:25 vare6gin kernel: [ 886.256277] ata3.00: configured for UDMA/133 Apr 25 00:28:25 vare6gin kernel: [ 886.256305] ata3: EH complete Apr 25 00:28:27 vare6gin kernel: [ 888.176088] ata3: EH in SWNCQ mode,QC:qc_active 0xF sactive 0xF Apr 25 00:28:27 vare6gin kernel: [ 888.176099] ata3: SWNCQ:qc_active 0xF defer_bits 0x0 last_issue_tag 0x3 Apr 25 00:28:27 vare6gin kernel: [ 888.176102] dhfis 0xF dmafis 0x1 sdbfis 0x0 Apr 25 00:28:27 vare6gin kernel: [ 888.176109] ata3: ATA_REG 0x51 ERR_REG 0x40 Apr 25 00:28:27 vare6gin kernel: [ 888.176113] ata3: tag : dhfis dmafis sdbfis sacitve Apr 25 00:28:27 vare6gin kernel: [ 888.176120] ata3: tag 0x0: 1 1 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176126] ata3: tag 0x1: 1 0 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176131] ata3: tag 0x2: 1 0 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176136] ata3: tag 0x3: 1 0 0 1

  • DPMS does not work: the monitor is not switched off

    - by bortzmeyer
    I have a monitor which was properly switched off by my Debian PC when unused. I attached it to another machine and, this time, it is never switched off. In /etc/X11/xorg.conf, I have:

        Section "Monitor"
            Identifier "Generic Monitor"
            Option "DPMS"

    It is recognized when X11 starts:

        (II) Loading extension DPMS
        ...
        (II) VESA(0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display
        ...
        (**) Option "dpms"
        (**) VESA(0): DPMS enabled

    The operating system is Debian stable ("lenny"). The graphics card is:

        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller])
            Subsystem: Hewlett-Packard Company Device 2a6f
            Flags: bus master, fast devsel, latency 0, IRQ 5
            Memory at fe900000 (32-bit, non-prefetchable) [size=512K]
            I/O ports at b080 [size=8]
            Memory at d0000000 (32-bit, prefetchable) [size=256M]
            Memory at fe800000 (32-bit, non-prefetchable) [size=1M]
            Capabilities: [90] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable-
            Capabilities: [d0] Power Management version 2

    X11 is:

        X.Org X Server 1.4.2
        Release Date: 11 June 2008
        X Protocol Version 11, Revision 0
        Build Operating System: Linux Debian (xorg-server 2:1.4.2-10.lenny2)
        Current Operating System: Linux ludwigVII 2.6.26-2-686 #1 SMP Sun Jun 21 04:57:38 UTC 2009 i686
        Build Date: 08 June 2009 09:12:57AM
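
    Note that "DPMS enabled" is not the same as the timeouts being set; the session may have zeroed them. A quick runtime check and test with the standard xset utility (the timeout values are illustrative):

        # Show the current DPMS state and standby/suspend/off timeouts
        xset q

        # Set timeouts (in seconds) and force an immediate blanking test
        xset dpms 300 600 900
        xset dpms force off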

  • Learning Basic Mathematics

    - by NeedsToKnow
    I'm going to just come out and say it. I'm 20 and can't do maths. Two years ago I passed the end-of-high-school mathematics exam (though not at school), and did pretty well. Since then, I haven't done a scrap of mathematics. I wondered just how bad I had gotten, so I was looking at some simple algebra problems, the kind you learn halfway through high school:

        5(-3x - 2) - (x - 3) = -4(4x + 5) + 13

    I couldn't do them. I've got half a year left until I start a Computer Science undergraduate degree. I love designing and creating programs, and I remember I loved mathematics back when I did it. Basically, I've had a pretty bad education, but I want to be knowledgeable in these areas. I was thinking of buying some high school textbooks and reading them, but I'm not sure this is the right way to go. I need to start off at some basic level and work towards a greater understanding. My question is: what should I study, how should I study, and what books can you recommend? Thanks!
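
    For reference, that example works out in two distribute-and-collect steps, and it happens to be an identity:

        5(-3x - 2) - (x - 3) = -4(4x + 5) + 13
        -15x - 10 - x + 3    = -16x - 20 + 13     (distribute)
        -16x - 7             = -16x - 7           (collect terms)

    Both sides are identical, so every value of x satisfies it.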

  • Sudden and frequent hangs on desktop computer: mobo or CPU fault?

    - by djechelon
    I have a desktop computer equipped with an ASUS Crosshair II Formula and a Phenom X6 3.2 GHz CPU. My problem is that the computer will often hang all of a sudden, completely ceasing to respond. When that occurs, the reset key is inoperative, and the power button turns the computer off but is unable to turn it back on; I have to physically disconnect the power cable. The problem can occur at any time: when I'm booting Windows, logging in, listening to a song, browsing the Internet, etc. It always occurs after a very few minutes of 3D gameplay. I thought it was a video card fault; I had three 8800 GTX cards, so I could try all combinations of them: that didn't fix it. I thought it was a RAM problem; I tried running with only a subset of my DDR2 banks, but that didn't fix it either. Almost every time, I have to reset and reconfigure the BIOS (without AHCI, Win7 won't boot, so I need to restore a few things). If I enable AMD Live, Cool'n'Quiet or other things from the CPU configuration menu, I can be sure that the computer won't reach the Windows desktop in 99% of cases (it randomly hangs somewhere in the boot process or even in the BIOS POST). Another interesting thing is that during POST the computer always takes an unusually long time detecting USB devices (the LCD POSTer shows USB INIT); I've also tried disconnecting all USB devices, but it didn't take less time to POST. The BIOS revision is 2702, the latest. Today I found a different behaviour once: during the boot screen I got a BSOD with error Stop 0x00000101, "A clock interrupt was not received on a secondary processor within the allocated time interval", which is usually related to overclocking, but I never overclocked my CPU. Judging from the description of my problem, hoping someone has had the same issue and fixed it, and since I don't have a spare CPU or motherboard for replacement, I'd like to ask whether you think this is a problem with a faulty CPU or a faulty motherboard, and whether I can perform additional tests (software only, given my lack of spare components) to identify the component to replace.

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting 45 current pending sectors. I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, writing data directly to a bad sector should trigger a reallocation, reducing the current pending sector count by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

        # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
        dd: error writing '/dev/sdb': Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine; I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

        Model Family:     Western Digital Caviar Green (AF)
        Device Model:     WDC WD15EARS-00Z5B1
        Serial Number:    WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity:    1,500,301,910,016 bytes [1.50 TB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS (minor revision not indicated)
        SATA Version is:  SATA 2.6, 3.0 Gb/s
        Local Time is:    Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions:

    Why aren't the reallocations being recorded by the disk? I'm assuming a reallocation took place, as I can now write data directly to the sectors and couldn't before.
    Why did shred cause reallocation when dd didn't? Does the fact that shred writes random data instead of just zeros make a difference?
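
    An aside on tooling: hdparm can read and write a single LBA directly, which is a more surgical way to poke a pending sector than dd (a sketch; the sector number is the one from the dd attempt above, and --write-sector destroys that sector's contents, hence the safety flag):

        # Read the suspect sector; an I/O error here confirms it's still pending
        hdparm --read-sector 217152 /dev/sdb

        # Overwrite just that sector, bypassing the block layer
        hdparm --yes-i-know-what-i-am-doing --write-sector 217152 /dev/sdb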

  • Accepts Nested Attributes For - edit form displays incorrect number of items

    - by Brightbyte8
    Hi, I have a teacher profile model which has many subjects (a separate model). I want to add subjects to the profile on the same form used for creating/editing a profile. I'm using accepts_nested_attributes_for, and this works fine for creation. However, on the edit page I am getting a very strange error: instead of seeing 3 subjects (I added three at create time, and a look into the console confirms this), I see 12 subjects(!).

        # Profile model
        class Profile < ActiveRecord::Base
          has_many :subjects
          accepts_nested_attributes_for :subjects
        end

        # Subject model
        class Subject < ActiveRecord::Base
          belongs_to :profile
        end

        # Profiles controller (only showing deviations from a normal RESTful setup)
        def new
          @profile = Profile.new
          3.times do
            @profile.subjects.build
          end
        end

    Here's one of the three parts of the subject output of = debug @profile:

        errors: !ruby/object:ActiveRecord::Errors
          base: *id004
          errors: !map:ActiveSupport::OrderedHash {}
        subjects:
        - &id001 !ruby/object:Subject
          attributes:
            exam: Either
            name: "7"
            created_at: 2010-04-15 10:38:13
            updated_at: 2010-04-15 10:38:13
            level: Either
            id: "31"
            profile_id: "3"
          attributes_cache: {}

    Note that only three of these subject blocks appear in the debug output, despite me seeing 12 subjects on screen. Other info in case it's relevant: Rails 2.3.5, Ruby 1.8.7 p149, HAML. I've never had so much difficulty with a bug before; I've already lost about 8 hours to it. Would really appreciate any help! Thanks to any courageous takers. Jack
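
    A pattern that often produces exactly this symptom (a guess, not a confirmed diagnosis): blank subjects being built in the view on every render, or fields_for being wrapped in a loop over the association. fields_for :subjects already iterates over the collection itself, so it should appear exactly once. A hedged sketch of the usual arrangement:

        # Profiles controller: build blanks only in the controller,
        # topping up to 3 slots without duplicating existing records
        def edit
          @profile = Profile.find(params[:id])
          (3 - @profile.subjects.size).times { @profile.subjects.build }
        end

    And in the HAML view, a single fields_for block (no surrounding @profile.subjects.each):

        - form_for @profile do |f|
          - f.fields_for :subjects do |sf|
            = sf.text_field :name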
