Search Results

Search found 2280 results on 92 pages for 'tmp'.

Page 65/92 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • php, curl, php curl, multipart/form-data, upload picture redirect

    - by Michael
    I'm trying to upload some pictures using PHP cURL on a classified ad website. I think that I set all the parameters properly, but I see that there is a kind of redirect after I post the picture. The issue is that the URL I get redirected to gives a 404 error instead of returning the HTML that it does when I make the post with a normal browser. Here is the PHP code that I have so far: $URL = "http://api.classistatic.com/api/image/upload"; $s = "PAD001"; $v = "2"; $n = "k"; $a = "1:a126581b8150ddc1337cabce28f2feb53849fd143bd6e42649f90175c0e023e3"; $u = "@/var/www/html/artwork/tmp/!BszBLV!EGk~$(KGrHqEOKicEvMi8HVg(BL5ZbWvs0g~~_1.JPG"; $htmlContent = $baseClass->processPicturerequest($URL, $s, $v, $b, $n, $a, $u); The log from the server is as follows: http://pastebin.com/gZqPgsFX
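
    For illustration only, here is a minimal Python sketch of the same pattern: a multipart/form-data upload that follows the redirect so the returned HTML can be inspected. The field names and the local file path are placeholders loosely taken from the question, not a verified description of the classistatic API, and it assumes the requests library.

      import requests  # third-party HTTP client

      URL = "http://api.classistatic.com/api/image/upload"
      fields = {"s": "PAD001", "v": "2", "n": "k", "a": "1:..."}            # placeholder values
      with open("/var/www/html/artwork/tmp/picture.jpg", "rb") as fh:        # hypothetical file
          resp = requests.post(
              URL,
              data=fields,
              files={"u": fh},          # sent as a multipart/form-data file part
              allow_redirects=True,     # follow the post-upload redirect
          )
      print(resp.status_code, resp.url)   # final URL after any redirects
      print(resp.text[:500])              # start of the HTML a browser would see

    Comparing resp.url with the URL the PHP code ends up on can show whether the 404 comes from the redirect target itself or from cookies/headers missing on the second request.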

    Read the article

  • Bash/shell script - shell output redirection inside a function

    - by Josh
    function grabSourceFile { cd /tmp/lmpsource wget $1 > $LOG baseName=$(basename $1) tar -xvf $baseName > $LOG cd $baseName } When I call this function, the captured output does not go to the log file. The output redirection works fine until I call the function. The $LOG variable is set at the top of the file. I tried echoing statements and they would not print. I am guessing the function captures the output itself? If so, how do I push the output to the file instead of the console? (The wget above prints to the console, while an echo inside the function does nothing.)

    Read the article

  • "NOT_SUPPORTED_BY_GUI" Exception in JCo

    - by cedar715
    We have a BAPI that uploads the specified document to SAP. The BAPI accepts three parameters: ID, FILE_LOC and FOLDER_NAME. I'm setting the values as follows in the JCo code: JCO.ParameterList paramList = function.getImportParameterList(); paramList.setValue("101XS1", "EXTERNAL_ID"); paramList.setValue("tmp", "FOLDER_NAME"); paramList.setValue("D:/upload/foo.txt", "FILE_LOCATION"); But when I try to execute the BAPI, I get the following exception: com.sap.mw.jco.JCO$Exception: (104) RFC_ERROR_SYSTEM_FAILURE: Exception condition "NOT_SUPPORTED_BY_GUI" raised. at com.sap.mw.jco.rfc.MiddlewareRFC$Client.nativeExecute(Native Method) at com.sap.mw.jco.rfc.MiddlewareRFC$Client.execute(MiddlewareRFC.java:1242) at com.sap.mw.jco.JCO$Client.execute(JCO.java:3816) at com.sap.mw.jco.JCO$Client.execute(JCO.java:3261) The same BAPI works fine if I execute it through the thick client (SAP Logon), but through JCo it gives this error.

    Read the article

  • URL to load resources from the classpath in Java

    - by Thilo
    In Java, you can load all kinds of resources using the same API but with different URL protocols: file:///tmp.txt http://127.0.0.1:8080/a.properties jar:http://www.foo.com/bar/baz.jar!/COM/foo/Quux.class This nicely decouples the actual loading of the resource from the application that needs the resource, and since a URL is just a String, resource loading is also very easily configurable. Is there a protocol to load resources using the current classloader? This is similar to the Jar protocol, except that I do not need to know which jar file or class folder the resource is coming from. I can do that using Class.getResourceAsStream("a.xml"), of course, but that would require me to use a different API, and hence changes to existing code. I want to be able to use this in all places where I can specify a URL for the resource already, by just updating a property file.

    Read the article

  • RCov started analyzing loaded libs (including Rdoc itself) – when using rvm (Ruby Version Manager)

    - by phvalues
    Context rcov 0.9.8 2010-02-28 ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.3.0] rvm 0.1.38 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/] System Ruby (rvm use system): ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10] Files The test setup is a 'lib' folder containing a single file which defines a class, the folders 'test' and 'test/sub_test', with 'sub_test' containing the single 'test_example_lib.rb' and a Rakefile like this: require 'rcov/rcovtask' task :default => [:rcov] desc "RCov" Rcov::RcovTask.new do | t | t.test_files = FileList[ 'test/**/test_*.rb' ] end Result #rake (in /Users/stephan/tmp/rcov_example) rm -r coverage Loaded suite /Users/stephan/.rvm/gems/ruby-1.8.7-p174/bin/rcov Started . Finished in 0.000508 seconds. 1 tests, 2 assertions, 0 failures, 0 errors +----------------------------------------------------+-------+-------+--------+ | File | Lines | LOC | COV | +----------------------------------------------------+-------+-------+--------+ |...ms/rcov-0.9.8/lib/rcov/code_coverage_analyzer.rb | 271 | 156 | 5.1% | |...ems/rcov-0.9.8/lib/rcov/differential_analyzer.rb | 116 | 82 | 9.8% | |lib/example_lib.rb | 16 | 11 | 72.7% | +----------------------------------------------------+-------+-------+--------+ |Total | 403 | 249 | 9.6% | +----------------------------------------------------+-------+-------+--------+ 9.6% 3 file(s) 403 Lines 249 LOC Question Why is RCov itself analysed here? I wouldn't expect that (and it doesn't happen when using 'rvm use system'). In fact it seems to be due to me using a Ruby installed via rvm.

    Read the article

  • Using HandVU with OpenCV on Mac OS X 10.6.3

    - by 23tux
    Hi, I've tried to use HandVu with OpenCV, but when I try to run "hvOpenCV config/default.conductor" I get a "Segmentation fault". Does anybody know this problem? macbook:handvu-beta3 User$ hvOpenCV config/default.conductor will load conductor from file: config/default.conductor Segmentation fault I installed OpenCV through http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port on Mac OS X 10.6.3, and HandVu through http://www.movesinstitute.org/~kolsch/HandVu/doc/InstallationLinux.html#source I think it's a problem with OpenCV, because when I try to run the peopledetect example I get a segmentation fault too. macbook:c User$ ./peopledetect pic1.png Segmentation fault And if I try to run the facedetect sample I get an error too: macbook:c User$ ./facedetect --cascade="../../haarcascades/haarcascade_frontalface_alt.xml" Xlib: extension "RANDR" missing on display "/tmp/launch-WUMho1/org.x:0". Hope someone can help! thx, tux

    Read the article

  • Implementation of ceil function in C.

    - by RBA
    Hi, I have two doubts regarding the ceil function. 1. The ceil() function is implemented in C. If I use ceil(3/2), it works fine. But when I use ceil(count/2), where the value of count is 3, it gives a compile-time error: /tmp/ccA4Yj7p.o(.text+0x364): In function `FrontBackSplit': : undefined reference to `ceil' collect2: ld returned 1 exit status How do I use the ceil function in the second case? Please suggest. 2. How can I implement my own ceil function in C? Please give some basic guidelines. Thanks.
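
    A note on the first doubt: "undefined reference to `ceil'" is a linker error rather than a compiler error, and it usually just means the math library was not linked; adding -lm to the gcc command normally resolves it. For the second doubt, the sketch below shows the core logic of a hand-rolled ceil, written in Python purely to illustrate the idea (truncate toward zero, then bump positive non-integers up by one); it ignores the edge cases (NaN, infinities, very large values) that a real C implementation has to handle.

      def my_ceil(x):
          """Smallest integer >= x, illustrating the logic only."""
          i = int(x)                     # int() truncates toward zero
          return i if i >= x else i + 1

      print(my_ceil(1.5), my_ceil(3.0), my_ceil(-1.5))   # -> 2 3 -1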

    Read the article

  • PHP stream_context_set_option SSL certificate as string

    - by Roger Thomas
    I've got a weird issue. Basically, I need to do this: $handle = stream_context_create(); stream_context_set_option($handle , 'ssl', 'local_cert', '/tmp/cert'); However, the certificate is not held as a file on the server; rather, it's an encrypted string held in a clustered database environment. So instead of the certificate being a file name pointer, it's the physical content of the certificate, and instead of using the file name I need to specify the content of the certificate. For example: $cert = '-----BEGIN CERTIFICATE-----.... upWbwmdMd61SjNCdtOpZcNW3YmzuT96Fr7GUPiDQ -----END CERTIFICATE-----'; Does anyone have any idea whatsoever how on earth I can do this? I'm scratching my head over this problem, but my gut instinct says it is doable. Thanks in advance everyone!
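
    One common workaround when an API insists on a file path is to materialise the in-memory PEM as a temporary file, hand over that path, and remove the file once the context has been created. A Python sketch of the pattern, purely for illustration (the PEM string and the consuming function are placeholders):

      import os
      import tempfile

      cert_pem = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"   # placeholder PEM from the database

      def use_certificate(path):
          """Hypothetical stand-in for whatever call requires a certificate file path."""
          print("would load certificate from", path)

      # Write the certificate string to a temporary file so path-based APIs can use it.
      with tempfile.NamedTemporaryFile(mode="w", suffix=".pem", delete=False) as tmp:
          tmp.write(cert_pem)
          cert_path = tmp.name

      use_certificate(cert_path)
      os.unlink(cert_path)          # clean up once the context no longer needs the file

    The same idea carries over to PHP: write the string out with tempnam() and file_put_contents(), point local_cert at that path, and unlink() it afterwards; the trade-off is that the decrypted certificate briefly touches disk.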

    Read the article

  • session variables lost between pages or use same variables

    - by user222333
    Hi, yesterday I learned many things from you, especially from Marc, and my problem was solved ( session variables lost between pages or use same variables ). But now I have a follow-up question: I don't want to use the session ID (session.use_trans_sid = 1) between pages, but I also don't want different users of the same application to share the same session variables, and I don't want to lose session variables between pages for the same user. Is it possible? If yes, how? Thanks to everybody for any help. Best regards. I have Wamp Server (2.2.11) with PHP (5.2.9-2). My php.ini session settings are below: [Session] session.save_handler = files session.save_path = "c:/wamp/tmp" session.use_cookies = 0 ;session.cookie_secure = ;session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = 0 session.bug_compat_warn = 1 session.referer_check = session.entropy_length = 0 session.entropy_file = ;session.entropy_length = 16 ;session.entropy_file = /dev/urandom session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 1 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"

    Read the article

  • Java apache poi setting the cell formula

    - by Ray
    I am trying to set a cell formula that references cells from other workbooks. However, when I open the programmatically generated workbook, the formula cells show up as #REF!. I print the generated formulas to a log, and if I cut and paste them into the cells, the numbers from the external workbooks are pulled in. String formula = "'C:\\tmp\\ForecastAggregate\\Total Products\\[ForecastWorksheet.xls]2010 Budget'!C10"; HSSFCell cell = row.createCell(0); //row was created above cell.setCellFormula(formula); Can anybody help?

    Read the article

  • Play video in UIWebView from NSData

    - by Papagalli
    Hello there, does anybody know how to play a video in a UIWebView? The video comes from the iPhone library; when I pick it, I store it in Core Data as binary data. If I use the local URL of the video it works fine; the problem is that this URL refers to a folder named "tmp", and I don't know its lifetime. If I could get the real URL of the video file it would be OK. //attachment is the core data object containing the data NSURL *url = [NSURL URLWithString:attachment.path]; [self.webView loadRequest:[NSURLRequest requestWithURL:url]]; I tried a solution that doesn't work (I tried the MIME types "video/quicktime" and "video/mp4"): [self.webView loadData:attachment.data MIMEType:@"video/mov" textEncodingName:nil baseURL:nil]; I think the second solution is the closest, and the problem comes from the MIME type. Thanks in advance if anybody has a solution for this :)

    Read the article

  • Running a batch file from Perl (Activestate perl in Windows)

    - by Jithesh
    I have a Perl program which does something like the following: #!perl use strict; use warnings; my $exe = "C:\\project\\set_env_and_run.bat"; my $arg1 = "\\\\Server\\share\\folder1"; my $arg2 = "D:\\output\\folder1"; my $cmd = "$exe \"$arg1\" \"$arg2\""; my $status = system("$cmd > c:\\tmp\out.txt 2>&1"); print "$status\n"; I am calling this Perl code in an eval block. When invoked, I get the status printed as 0, but the batch file has not actually executed. What would be the reason for this? Thanks, Jits

    Read the article

  • Python-MySQLdb problem: wrong ELF class: ELFCLASS32

    - by jsalonen
    As part of trying out django CMS (http://www.django-cms.org/), I'm struggling to get Python-MySQLdb to work (http://pypi.python.org/pypi/MySQL-python/). I have installed Django CMS and all of its dependencies (Python 2.5, Django, django-south, MySQL server), and I'm trying out the example code within Django CMS with MySQL as the chosen database type. When I execute python manage.py syncdb, the following error occurs: django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: /root/.python-eggs/MySQL_python-1.2.3c1-py2.5-linux-i686.egg-tmp/_mysql.so: wrong ELF class: ELFCLASS32 I have been able to trace the problem specifically to python-mySQLdb (as also visible in the stack trace). Other than that, I am completely puzzled. I don't have a clue what ELFCLASS32 means, or what an ELF class is anyway. I suspect that this error could have something to do with the fact that I am running the 64-bit version of Debian 5 (on a VPS). Any good ideas on how to troubleshoot?
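
    For context, ELFCLASS32 marks a 32-bit ELF binary: the error says a 64-bit Python is refusing to load a 32-bit _mysql.so (the i686 egg named in the traceback). A quick way to confirm which side is which is to print the interpreter's word size:

      import platform
      import struct
      import sys

      print(sys.version)                           # interpreter build string
      print(struct.calcsize("P") * 8, "bit")       # 64 on a 64-bit Python, 32 on a 32-bit one
      print(platform.machine())                    # e.g. x86_64 vs i686

    If the interpreter reports 64-bit, the usual fix is to rebuild MySQL-python against the 64-bit Python and MySQL client libraries (and clear the stale egg cache under /root/.python-eggs) rather than reusing the i686 build.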

    Read the article

  • gitosis did not generate projects.list automatically, gitweb can't work.

    - by Readon Shaw
    I set up a gitosis-managed git server. git clone works, but gitweb does not when I configure it via gitweb.conf as below: $projectroot = "/srv/gitosis/repositories"; $git_temp = "/tmp"; $home_text = "indextext.html"; $projects_list = "/srv/gitosis/gitosis/projects.list"; $stylesheet = "/gitweb/gitweb.css"; $logo = "/gitweb/git-logo.png"; $favicon = "/gitweb/git-favicon.png"; (Btw, the comments were deleted because the special symbol # is used as a bold prefix.) "403 Forbidden - No projects found" is reported when I access gitweb through "http://localhost/cgi-bin/gitweb.cgi". I checked the projects.list file and it is empty. Is that the reason why gitweb access fails? What would be the correct content? Can I add it manually?

    Read the article

  • How do I export a large table into 50 smaller csv files of 100,000 records each

    - by Eddie
    I am trying to export one field from a very large table - containing 5,000,000 records, for example - into a csv list, but not all together; rather, 100,000 records into each .csv file created, without duplication. How can I do this, please? I tried SELECT field_name FROM table_name WHERE certain_conditions_are_met INTO OUTFILE /tmp/name_of_export_file_for_first_100000_records.csv LINES TERMINATED BY '\n' LIMIT 0 , 100000 That gives the first 100000 records, but nothing I do gets the other 4,900,000 records exported into 49 other files - and how do I specify the other 49 filenames? For example, I tried the following, but the SQL syntax is wrong: SELECT field_name FROM table_name WHERE certain_conditions_are_met INTO OUTFILE /home/user/Eddie/name_of_export_file_for_first_100000_records.csv LINES TERMINATED BY '\n' LIMIT 0 , 100000 INTO OUTFILE /home/user/Eddie/name_of_export_file_for_second_100000_records.csv LINES TERMINATED BY '\n' LIMIT 100001 , 200000 and that did not create the second file... What am I doing wrong, please, and is there a better way to do this? Should the LIMIT 0 , 100000 be put before the first INTO OUTFILE statement, and then the entire command repeated from SELECT for the second 100,000 records, etc.? Thanks for any help. Eddie
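
    A single SELECT cannot carry more than one INTO OUTFILE clause, so one workable pattern is to drive the export from a small script that runs one query per chunk. Note that the second argument of LIMIT is a row count, not an end position, and that an ORDER BY on a unique column is worth adding so the chunks stay stable and non-overlapping. A hedged Python sketch of the loop (connection details, table, and conditions are placeholders; it assumes the MySQLdb driver):

      import MySQLdb  # any DB-API driver would do

      CHUNK = 100000
      conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")  # placeholders
      cur = conn.cursor()

      for i in range(50):                          # 50 files x 100,000 rows = 5,000,000 rows
          outfile = "/tmp/export_%02d.csv" % i     # INTO OUTFILE writes on the DB server and needs the FILE privilege
          sql = (
              "SELECT field_name FROM table_name "
              "WHERE certain_conditions_are_met "
              "INTO OUTFILE '%s' LINES TERMINATED BY '\\n' "
              "LIMIT %d, %d"
          ) % (outfile, i * CHUNK, CHUNK)
          cur.execute(sql)

      conn.close()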

    Read the article

  • Mono mkbundle issue

    - by Sean
    Hello, I am trying to bundle the Mono runtime with a simple C# (console) program that does nothing but 'hello world'. I got Cygwin and configured it all, but it fails: $ mkbundle -o x2 x.exe --deps -z OS is: Windows Sources: 1 Auto-dependencies: True embedding: C:\cygwin\home\Sean\x.exe compression ratio: 31.71% embedding: C:\PROGRA~2\MONO-2~1.1\lib\mono\4.0\mscorlib.dll compression ratio: 34.68% Compiling: as -o temp.o temp.s gcc -mno-cygwin -g -o x2 -Wall temp.c `pkg-config --cflags --libs mono-2|dos2unix` -lz temp.o temp.c: In function `main': temp.c:173: warning: implicit declaration of function `g_utf16_to_utf8' temp.c:173: warning: assignment makes pointer from integer without a cast temp.c:188: warning: assignment makes pointer from integer without a cast /tmp/ccu8fTcQ.o: In function `main': /home/Sean/temp.c:173: undefined reference to `_g_utf16_to_utf8' /home/Sean/temp.c:188: undefined reference to `_g_utf16_to_utf8' collect2: ld returned 1 exit status [Fail] I tried a post with a similar problem to mine here, but it didn't work either. My configuration: Windows 7 64bit Mono 2.8.1 Latest Cygwin Not sure what the story is here. Help appreciated.

    Read the article

  • Reverse Engineering Custom Data File

    - by kerchingo
    At my place of work we have a legacy document management system that is now unsupported by the developers for various reasons. I have been asked to look into extracting the documents contained in this system so they can eventually be imported into a new 3rd party system. From various tracing and process monitoring I have determined that the document images (mainly TIFF files) are stored in a number of 1.5GB files; these files seem to be read from a specific offset and then written to a tmp file that is served via a web app to the client and then deleted. I am looking for suggestions as to how I can inspect these large files that contain the TIFF images and eventually extract and write the images to individual files.
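
    One low-tech way to start inspecting container files like this is to scan for the TIFF magic bytes (II*\0 for little-endian, MM\0* for big-endian) and carve out the data between consecutive headers. The Python sketch below illustrates that idea only: the path is a placeholder, it reads the whole 1.5 GB file into memory (stream it in chunks if that is a problem), and it assumes images sit roughly back-to-back, which may not hold for this particular format.

      import re

      TIFF_MAGIC = re.compile(rb"II\*\x00|MM\x00\*")   # little- and big-endian TIFF headers

      path = "/path/to/container.dat"                  # placeholder: one of the 1.5 GB files
      with open(path, "rb") as fh:
          data = fh.read()

      offsets = [m.start() for m in TIFF_MAGIC.finditer(data)]
      print(len(offsets), "candidate TIFF headers found")

      # Naive carve: assume each image runs up to the next header (or end of file).
      # Trailing bytes are usually harmless because TIFF readers follow internal offsets.
      for i, start in enumerate(offsets):
          end = offsets[i + 1] if i + 1 < len(offsets) else len(data)
          with open("image_%05d.tif" % i, "wb") as out:
              out.write(data[start:end])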

    Read the article

  • Change present working directory of a calling shell from a ruby script

    - by Erik Kastman
    I'm writing a simple ruby sandbox command-line utility to copy and unzip directories from a remote filesystem to a local scratch directory in order to unzip them and let users edit the files. I'm using Dir.mktmpdir as the default scratch directory, which gives a really ugly path (for example: /var/folders/zz/zzzivhrRnAmviuee+++1vE+++yo/-Tmp-/d20100311-70034-abz5zj) I'd like the last action of the copy-and-unzip script to cd the calling shell into the new scratch directory so people can access it easily, but I can't figure out how to change the PWD of the calling shell. One possibility is to have the utility print out the new path to stdout and then run the script as part of a subshell (i.e. cd $(sandbox my_dir) ), but I want to print out progress on the copy-and-unzipping since it can take up to 10 minutes, so this won't work. Should I just have it go to a pre-determined, easy-to-find scratch directory? Does anyone have a better suggestion? Thanks in advance for your help. -Erik

    Read the article

  • using a "temporary files" folder in python

    - by zubin71
    I recently wrote a script which queries PyPI and downloads a package; however, the package gets downloaded to a user-defined folder. I'd like to modify the script in such a way that my downloaded files go into a temporary folder if no folder is specified. The temporary-files folder on *nix machines is "/tmp"; is there a Python method I could use to find out the temporary-files folder on a particular machine? If not, could someone suggest an alternative approach to this problem?
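
    The standard library's tempfile module covers this: tempfile.gettempdir() returns the platform's temporary directory, and tempfile.mkdtemp() creates a fresh directory inside it. A small sketch of the fallback logic (the function and prefix names are just illustrative):

      import tempfile

      def resolve_download_dir(user_dir=None):
          """Use the user-supplied folder if given, otherwise a fresh temp directory."""
          if user_dir:
              return user_dir
          return tempfile.mkdtemp(prefix="pypi_download_")   # e.g. /tmp/pypi_download_xxxx on *nix

      print(tempfile.gettempdir())       # platform temp dir, e.g. /tmp or C:\Users\...\Temp
      print(resolve_download_dir())      # newly created scratch folder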

    Read the article

  • apt-get update mdadm scary warnings

    - by user568829
    Just ran an apt-get update on one of my dedicated servers to be left with a relatively scary warning: Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-2.6.26-2-686-bigmem W: mdadm: the array /dev/md/1 with UUID c622dd79:496607cf:c230666b:5103eba0 W: mdadm: is currently active, but it is not listed in mdadm.conf. if W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE! W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes. W: mdadm: the array /dev/md/2 with UUID 24120323:8c54087c:c230666b:5103eba0 W: mdadm: is currently active, but it is not listed in mdadm.conf. if W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE! W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes. W: mdadm: the array /dev/md/6 with UUID eef74de5:9267b2a1:c230666b:5103eba0 W: mdadm: is currently active, but it is not listed in mdadm.conf. if W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE! W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes. W: mdadm: the array /dev/md/5 with UUID 5d45b20c:04d8138f:c230666b:5103eba0 W: mdadm: is currently active, but it is not listed in mdadm.conf. if W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE! W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes. As instructed I inspected the output of /usr/share/mdadm/mkconf and compared with /etc/mdadm/mdadm.conf and they are quite different. Here is the /etc/mdadm/mdadm.conf contents: # mdadm.conf # # Please refer to mdadm.conf(5) for information about this file. # # by default, scan all partitions (/proc/partitions) for MD superblocks. # alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions # auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR root # definitions of existing MD arrays ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b93b0b87:5f7c2c46:0043fca9:4026c400 ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c0fa8842:e214fb1a:fad8a3a2:28f2aabc ARRAY /dev/md2 level=raid1 num-devices=2 UUID=cdc2a9a9:63bbda21:f55e806c:a5371897 ARRAY /dev/md3 level=raid1 num-devices=2 UUID=eca75495:9c9ce18c:d2bac587:f1e79d80 # This file was auto-generated on Wed, 04 Nov 2009 11:32:16 +0100 # by mkconf $Id$ And here is the out put from /usr/share/mdadm/mkconf # mdadm.conf # # Please refer to mdadm.conf(5) for information about this file. # # by default, scan all partitions (/proc/partitions) for MD superblocks. # alternatively, specify devices to scan, using wildcards if desired. 
DEVICE partitions # auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR root # definitions of existing MD arrays ARRAY /dev/md1 UUID=c622dd79:496607cf:c230666b:5103eba0 ARRAY /dev/md2 UUID=24120323:8c54087c:c230666b:5103eba0 ARRAY /dev/md5 UUID=5d45b20c:04d8138f:c230666b:5103eba0 ARRAY /dev/md6 UUID=eef74de5:9267b2a1:c230666b:5103eba0 # This configuration was auto-generated on Sat, 25 Feb 2012 13:10:00 +1030 # by mkconf 3.1.4-1+8efb9d1+squeeze1 As I understand it I need to replace the four lines that start with 'ARRAY' in the /etc/mdadm/mdadm.conf file with the different four 'ARRAY' lines from the /usr/share/mdadm/mkconf output. When I did this and then ran update-initramfs -u there were no more warnings. Is what I have done above correct? I am now terrified of rebooting the server for fear it will not reboot and being a remote dedicated server this would certainly mean downtime and possibly would be expensive to get running again. FOLLOW UP (response to question): the output from mount: /dev/md1 on / type ext3 (rw,usrquota,grpquota) tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) udev on /dev type tmpfs (rw,mode=0755) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620) /dev/md2 on /boot type ext2 (rw) /dev/md5 on /tmp type ext3 (rw) /dev/md6 on /home type ext3 (rw,usrquota,grpquota) mdadm --detail /dev/md0 mdadm: md device /dev/md0 does not appear to be active. mdadm --detail /dev/md1 /dev/md1: Version : 0.90 Creation Time : Sun Aug 14 09:43:08 2011 Raid Level : raid1 Array Size : 31463232 (30.01 GiB 32.22 GB) Used Dev Size : 31463232 (30.01 GiB 32.22 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Sat Feb 25 14:03:47 2012 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : c622dd79:496607cf:c230666b:5103eba0 Events : 0.24 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 mdadm --detail /dev/md2 /dev/md2: Version : 0.90 Creation Time : Sun Aug 14 09:43:09 2011 Raid Level : raid1 Array Size : 104320 (101.89 MiB 106.82 MB) Used Dev Size : 104320 (101.89 MiB 106.82 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Sat Feb 25 13:20:20 2012 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 24120323:8c54087c:c230666b:5103eba0 Events : 0.30 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 18 1 active sync /dev/sdb2 mdadm --detail /dev/md3 mdadm: md device /dev/md3 does not appear to be active. 
mdadm --detail /dev/md5 /dev/md5: Version : 0.90 Creation Time : Sun Aug 14 09:43:09 2011 Raid Level : raid1 Array Size : 2104448 (2.01 GiB 2.15 GB) Used Dev Size : 2104448 (2.01 GiB 2.15 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 5 Persistence : Superblock is persistent Update Time : Sat Feb 25 14:09:03 2012 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 5d45b20c:04d8138f:c230666b:5103eba0 Events : 0.30 Number Major Minor RaidDevice State 0 8 5 0 active sync /dev/sda5 1 8 21 1 active sync /dev/sdb5 mdadm --detail /dev/md6 /dev/md6: Version : 0.90 Creation Time : Sun Aug 14 09:43:09 2011 Raid Level : raid1 Array Size : 453659456 (432.64 GiB 464.55 GB) Used Dev Size : 453659456 (432.64 GiB 464.55 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 6 Persistence : Superblock is persistent Update Time : Sat Feb 25 14:10:00 2012 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : eef74de5:9267b2a1:c230666b:5103eba0 Events : 0.31 Number Major Minor RaidDevice State 0 8 6 0 active sync /dev/sda6 1 8 22 1 active sync /dev/sdb6 FOLLOW UP 2 (response to question): Output from /etc/fstab /dev/md1 / ext3 defaults,usrquota,grpquota 1 1 devpts /dev/pts devpts mode=0620,gid=5 0 0 proc /proc proc defaults 0 0 #usbdevfs /proc/bus/usb usbdevfs noauto 0 0 /dev/cdrom /media/cdrom auto ro,noauto,user,exec 0 0 /dev/dvd /media/dvd auto ro,noauto,user,exec 0 0 # # # /dev/md2 /boot ext2 defaults 1 2 /dev/sda3 swap swap pri=42 0 0 /dev/sdb3 swap swap pri=42 0 0 /dev/md5 /tmp ext3 defaults 0 0 /dev/md6 /home ext3 defaults,usrquota,grpquota 1 2

    Read the article

  • Exception with Subsonic 2.2, SQLite and Migrations

    - by Holger Amann
    Hi, I'm playing with Migrations and created a simple migration like public class Migration001 : Migration { public override void Up() { TableSchema.Table testTable = CreateTableWithKey("TestTable"); } public override void Down() { } } after executing sonic.exe migrate I'm getting the following output: Setting ConfigPath: 'App.config' Building configuration from c:\tmp\MigrationTest\MigrationTest\App.config Adding connection to SQLiteProvider Found 1 migration files Current DB Version is 0 Migrating to 001_Init (1) There was an error running migration (001_Init): SQLite error near "IDENTITY": syntax error Stack Trace: at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] argum ents, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at System.RuntimeMethodHandle.InvokeMethodFast(Object target, Object[] argume nts, Signature sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwn er) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invoke Attr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisib ilityChecks) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invoke Attr, Binder binder, Object[] parameters, CultureInfo culture) at SubSonic.CodeRunner.RunAndExecute(ICodeLanguage lang, String sourceCode, S tring methodName, Object[] parameters) in D:\@SubSonic\SubSonic\SubSonic.Migrati ons\CodeRunner.cs:line 95 at SubSonic.Migrations.Migrator.ExecuteMigrationCode(String migrationFile) in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 177 at SubSonic.Migrations.Migrator.Migrate() in D:\@SubSonic\SubSonic\SubSonic.M igrations\Migrator.cs:line 141 Any hints?

    Read the article

  • High Server Load cannot figure out why

    - by Tim Bolton
    My server is currently running CentOS 5.2, with WHM 11.34. Currently, we're at 6.43 to 12 for a load average. The sites that we're hosting are taking a lot time to respond and resolve. top doesn't show anything out of the ordinary and iftop doesn't show a lot of traffic. We have many resellers, and some not so good at writing code, how can we find the culprit? vmstat output: vmstat procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 2 84 78684 154916 1021080 0 0 72 274 0 14 6 3 80 12 0 top output (ordered by %CPU) top - 21:44:43 up 5 days, 10:39, 3 users, load average: 3.36, 4.18, 4.73 Tasks: 222 total, 3 running, 219 sleeping, 0 stopped, 0 zombie Cpu(s): 5.8%us, 2.3%sy, 0.2%ni, 79.6%id, 11.8%wa, 0.0%hi, 0.2%si, 0.0%st Mem: 2074580k total, 1863044k used, 211536k free, 174828k buffers Swap: 2040212k total, 84k used, 2040128k free, 987604k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 15930 mysql 15 0 138m 46m 4380 S 4 2.3 1:45.87 mysqld 21772 igniteth 17 0 23200 7152 3932 R 4 0.3 0:00.02 php 1586 root 10 -5 0 0 0 S 2 0.0 11:45.19 kjournald 21759 root 15 0 2416 1024 732 R 2 0.0 0:00.01 top 1 root 15 0 2156 648 560 S 0 0.0 0:26.31 init 2 root RT 0 0 0 0 S 0 0.0 0:00.35 migration/0 3 root 34 19 0 0 0 S 0 0.0 0:00.32 ksoftirqd/0 4 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 5 root RT 0 0 0 0 S 0 0.0 0:02.00 migration/1 6 root 34 19 0 0 0 S 0 0.0 0:00.11 ksoftirqd/1 7 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 8 root RT 0 0 0 0 S 0 0.0 0:01.29 migration/2 9 root 34 19 0 0 0 S 0 0.0 0:00.26 ksoftirqd/2 10 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/2 11 root RT 0 0 0 0 S 0 0.0 0:00.90 migration/3 12 root 34 19 0 0 0 R 0 0.0 0:00.20 ksoftirqd/3 13 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/3 top output (ordered by CPU time) top - 21:46:12 up 5 days, 10:41, 3 users, load average: 2.88, 3.82, 4.55 Tasks: 217 total, 1 running, 216 sleeping, 0 stopped, 0 zombie Cpu(s): 3.7%us, 2.0%sy, 2.0%ni, 67.2%id, 25.0%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 2074580k total, 1959516k used, 115064k free, 183116k buffers Swap: 2040212k total, 84k used, 2040128k free, 1090308k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ TIME COMMAND 32367 root 16 0 215m 212m 1548 S 0 10.5 62:03.63 62:03 tailwatchd 1586 root 10 -5 0 0 0 S 0 0.0 11:45.27 11:45 kjournald 1576 root 10 -5 0 0 0 S 0 0.0 2:37.86 2:37 kjournald 27722 root 16 0 2556 1184 800 S 0 0.1 1:48.94 1:48 top 15930 mysql 15 0 138m 46m 4380 S 4 2.3 1:48.63 1:48 mysqld 2932 root 34 19 0 0 0 S 0 0.0 1:41.05 1:41 kipmi0 226 root 10 -5 0 0 0 S 0 0.0 1:34.33 1:34 kswapd0 2671 named 25 0 74688 7400 2116 S 0 0.4 1:23.58 1:23 named 3229 root 15 0 10300 3348 2724 S 0 0.2 0:40.85 0:40 sshd 1580 root 10 -5 0 0 0 S 0 0.0 0:30.62 0:30 kjournald 1 root 17 0 2156 648 560 S 0 0.0 0:26.32 0:26 init 2616 root 15 0 1816 576 480 S 0 0.0 0:23.50 0:23 syslogd 1584 root 10 -5 0 0 0 S 0 0.0 0:18.67 0:18 kjournald 4342 root 34 19 27692 11m 2116 S 0 0.5 0:18.23 0:18 yum-updatesd 8044 bollingp 15 0 3456 2036 740 S 1 0.1 0:15.56 0:15 imapd 26 root 10 -5 0 0 0 S 0 0.0 0:14.18 0:14 kblockd/1 7989 gmailsit 16 0 3196 1748 736 S 0 0.1 0:10.43 0:10 imapd iostat -xtk 1 10 output [root@server1 tmp]# iostat -xtk 1 10 Linux 2.6.18-53.el5 12/18/2012 Time: 09:51:06 PM avg-cpu: %user %nice %system %iowait %steal %idle 5.83 0.19 2.53 11.85 0.00 79.60 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.37 118.83 18.70 54.27 131.47 692.72 22.59 4.90 67.19 3.10 22.59 sdb 0.35 39.33 
20.33 61.43 158.79 403.22 13.75 5.23 63.93 3.77 30.80 Time: 09:51:07 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.50 0.00 0.50 24.00 0.00 74.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 25.00 2.00 2.00 128.00 108.00 118.00 0.03 7.25 4.00 1.60 sdb 0.00 16.00 41.00 145.00 200.00 668.00 9.33 107.92 272.72 5.38 100.10 Time: 09:51:08 PM avg-cpu: %user %nice %system %iowait %steal %idle 2.00 0.00 1.50 29.50 0.00 67.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 95.00 3.00 33.00 12.00 480.00 27.33 0.07 1.72 1.31 4.70 sdb 0.00 14.00 1.00 228.00 4.00 960.00 8.42 143.49 568.01 4.37 100.10 Time: 09:51:09 PM avg-cpu: %user %nice %system %iowait %steal %idle 13.28 0.00 2.76 21.30 0.00 62.66 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 21.00 1.00 19.00 16.00 192.00 20.80 0.06 3.55 1.30 2.60 sdb 0.00 36.00 28.00 181.00 124.00 884.00 9.65 121.16 617.31 4.79 100.10 Time: 09:51:10 PM avg-cpu: %user %nice %system %iowait %steal %idle 4.74 0.00 1.50 25.19 0.00 68.58 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 20.00 3.00 15.00 12.00 136.00 16.44 0.17 7.11 3.11 5.60 sdb 0.00 0.00 103.00 60.00 544.00 248.00 9.72 52.35 545.23 6.14 100.10 Time: 09:51:11 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.24 0.00 1.24 25.31 0.00 72.21 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 75.00 4.00 28.00 16.00 416.00 27.00 0.08 3.72 2.03 6.50 sdb 2.00 9.00 124.00 17.00 616.00 104.00 10.21 3.73 213.73 7.10 100.10 Time: 09:51:12 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.00 0.00 0.75 24.31 0.00 73.93 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 24.00 1.00 9.00 4.00 132.00 27.20 0.01 1.20 1.10 1.10 sdb 4.00 40.00 103.00 48.00 528.00 212.00 9.80 105.21 104.32 6.64 100.20 Time: 09:51:13 PM avg-cpu: %user %nice %system %iowait %steal %idle 2.50 0.00 1.75 23.25 0.00 72.50 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 125.74 3.96 46.53 15.84 689.11 27.92 0.20 4.06 2.41 12.18 sdb 2.97 0.00 91.09 84.16 419.80 471.29 10.17 85.85 590.78 5.66 99.11 Time: 09:51:14 PM avg-cpu: %user %nice %system %iowait %steal %idle 0.75 0.00 0.50 24.94 0.00 73.82 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 88.00 1.00 7.00 4.00 380.00 96.00 0.04 4.38 3.00 2.40 sdb 3.00 7.00 111.00 44.00 540.00 208.00 9.65 18.58 581.79 6.46 100.10 Time: 09:51:15 PM avg-cpu: %user %nice %system %iowait %steal %idle 11.03 0.00 3.26 26.57 0.00 59.15 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 145.00 7.00 53.00 28.00 792.00 27.33 0.15 2.50 1.55 9.30 sdb 1.00 0.00 155.00 0.00 800.00 0.00 10.32 2.85 18.63 6.46 100.10 [root@server1 tmp]# MySQL Show Full Processlist mysql> show full processlist; 
+------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 1 | DB_USER_ONE | localhost | DB_ONE | Query | 3 | waiting for handler insert | INSERT DELAYED INTO defers (mailtime,msgid,email,transport_method,message,host,ip,router,deliveryuser,deliverydomain) VALUES(FROM_UNIXTIME('1355879748'),'1TivwL-0003y8-8l','[email protected]','remote_smtp','SMTP error from remote mail server after initial connection: host mx1.mail.tw.yahoo.com [203.188.197.119]: 421 4.7.0 [TS01] Messages from 75.125.90.146 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html','mx1.mail.tw.yahoo.com','203.188.197.119','lookuphost','','') | | 2 | DELAYED | localhost | DB_ONE | Delayed insert | 52 | insert | | | 3 | DELAYED | localhost | DB_ONE | Delayed insert | 68 | insert | | | 911 | DELAYED | localhost | DB_ONE | Delayed insert | 99 | Waiting for INSERT | | | 993 | DB_USER_TWO | localhost | DB_TWO | Sleep | 832 | | NULL | | 994 | DB_USER_ONE | localhost | DB_ONE | Query | 185 | Locked | delete from failures where FROM_UNIXTIME(UNIX_TIMESTAMP(NOW())-1296000) > mailtime | | 1102 | DB_USER_THREE | localhost | DB_THREE | Query | 29 | NULL | commit | | 1249 | DB_USER_FOUR | localhost | DB_FOUR | Query | 13 | NULL | commit | | 1263 | root | localhost | DB_FIVE | Query | 0 | NULL | show full processlist | | 1264 | DB_USER_SIX | localhost | DB_SIX | Query | 3 | NULL | commit | +------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 10 rows in set (0.00 sec)

    Read the article

  • How to get SVN to ignore Mac "._" files?

    - by Dave Child
    My development server is accessed by several OSX users, and their OS tends to leave lots of unnecessary files around the place, all starting with dot underscore ("._"). I know OSX can be told not to create these on network drives, but they still sneak in. I'd like SVN to ignore anything starting with "._", but I can't seem to get it to work, even though it looks like it should be simple. I've added "._*" to the SVN global ignore pattern, but SVN is still trying to add and commit these files. Can anyone tell me what I'm doing wrong? My full SVN ignore pattern is: global-ignores = *.o *.lo *.la *.al .libs *.so *.so.[0-9]* *.a *.pyc *.pyo *.rej *~ #*# .#* .*.swp .DS_Store Thumbs.db ._* *.bak *.tmp nbproject I don't know if it makes any difference, but I'm trying to set this on both Ubuntu and Ubuntu server by editing the /etc/subversion/config file.

    Read the article

  • PHP: Convert curl_exec output to UTF8

    - by Paul Tarjan
    I would like to only work with UTF8. The problem is I don't know the charset of every webpage. How can I detect it and convert to UTF8? <?php $url = "http://vkontakte.ru"; $ch = curl_init($url); $options = array( CURLOPT_RETURNTRANSFER => true, ); curl_setopt_array($ch, $options); $data = curl_exec($ch); // $data = magic($data); print $data; See this at: http://paulisageek.com/tmp/curl-utf8 What is magic()?
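
    A sketch of what magic() has to do, written in Python for illustration: take the charset from the Content-Type header or sniff it from the body, then decode and re-encode as UTF-8. (The PHP equivalent is usually built from mb_detect_encoding()/iconv(); the sketch assumes the requests library, whose apparent_encoding does the sniffing.)

      import requests

      def fetch_as_utf8(url):
          """Download a page and return its body re-encoded as UTF-8 (best effort)."""
          resp = requests.get(url)
          # Prefer the sniffed encoding; declared header charsets are often missing or wrong.
          encoding = resp.apparent_encoding or resp.encoding or "utf-8"
          return resp.content.decode(encoding, errors="replace").encode("utf-8")

      print(fetch_as_utf8("http://vkontakte.ru")[:200])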

    Read the article

  • Smooth Error in qplot from ggplot2

    - by Jared
    I have some data that I am trying to plot faceted by its Type with a smooth (Loess, LM, whatever) superimposed. Generation code is below: testFrame <- data.frame(Time=sample(20:60,50,replace=T),Dollars=round(runif(50,0,6)),Type=sample(c("First","Second","Third","Fourth"),50,replace=T,prob=c(.33,.01,.33,.33))) I have no problem either making a faceted plot or plotting the smooth, but I cannot do both. The first three lines of code below work fine. The fourth line is where I have trouble: qplot(Time,Dollars,data=testFrame,colour=Type) qplot(Time,Dollars,data=testFrame,colour=Type) + geom_smooth() qplot(Time,Dollars,data=testFrame) + facet_wrap(~Type) qplot(Time,Dollars,data=testFrame) + facet_wrap(~Type) + geom_smooth() It gives the following error: Error in `[<-.data.frame`(`*tmp*`, var, value = list(NA = NULL)) : missing values are not allowed in subscripted assignments of data frames What am I missing to overlay a smooth in a faceted plot? I could have sworn I had done this before, possibly even with the same data.

    Read the article
