Search Results

Search found 9788 results on 392 pages for 'character limit'.


  • Oracle Blob as img src in PHP page

    - by menkes
    I have a site that currently uses images on a file server. The images appear on a page where the user can drag and drop each as needed. This is done with jQuery, and the images are enclosed in a list. Each image is pretty standard:

        <img src='//network_path/image.png' height='80px'>

    Now, however, I need to reference images stored as a BLOB in an Oracle database (no choice on this, so not a merit discussion). I have no problem retrieving the BLOB and displaying it on its own using:

        $sql = "SELECT image FROM images WHERE image_id = 123";
        $stid = oci_parse($conn, $sql);
        oci_execute($stid);
        $row = oci_fetch_array($stid, OCI_ASSOC+OCI_RETURN_NULLS);
        $img = $row['IMAGE']->load();
        header("Content-type: image/jpeg");
        print $img;

    But I need to [efficiently] get that image into the src attribute of the img tag. I tried imagecreatefromstring(), but that just returns the image to the browser, ignoring the other HTML. I looked at data URIs, but the IE8 size limit rules that out. So now I am kind of stuck. My searches keep coming up with using a src attribute that loads another page that contains the image, but I need the image itself to actually show on the page. (Note: I say image, meaning at least one image but as many as eight on a page.) Any help would be greatly appreciated.
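
    For what it's worth, a src that points at a PHP script is not a "page that contains the image": if the script emits only the image bytes with an image Content-Type, the browser renders it inline exactly like a static file. A minimal sketch along the lines of the question's own fetch code (the connection details, and serving JPEG specifically, are assumptions):

        <?php
        // image.php - emits a single BLOB as raw image bytes
        $conn = oci_connect('user', 'password', '//dbhost/SERVICE'); // placeholder credentials

        $id   = (int) $_GET['id']; // cast plus bind parameter guards against SQL injection
        $stid = oci_parse($conn, 'SELECT image FROM images WHERE image_id = :id');
        oci_bind_by_name($stid, ':id', $id);
        oci_execute($stid);
        $row = oci_fetch_array($stid, OCI_ASSOC + OCI_RETURN_NULLS);

        header('Content-Type: image/jpeg');    // match whatever format is actually stored
        header('Cache-Control: max-age=3600'); // lets the browser cache repeat views
        print $row['IMAGE']->load();

    Each of the (up to eight) tags then reads <img src='image.php?id=123' height='80px'>, and the draggable jQuery list markup stays unchanged.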


  • .Net file writing and string splitting issues

    - by sagar
    I have a requirement where a file should be split using a given character. The default splitting options are CRLF and LF; in these cases I split the line by \r\n and \r respectively. I also have a requirement that a file of any size should be processed. (Processing is basically inserting a given string into the file at a given position.) For this I am reading the file in chunks of 1024 bytes and then applying the string.Split() method. Split() offers options for removing empty entries, or none. I have to add the line-break characters back to each line; for this I am using a BinaryWriter and writing the byte array to the new file. Issues:
    1) When the line break is CRLF and the split option is None, empty entries are also added to the split array. With the second option (remove empty entries), CRLF works properly.
    2) But the remove-empty-entries option creates other problems: as I am reading the file byte by byte, I can't afford to drop anything.
    3) When the line-break character is other than the default (e.g. '|'), a null value is prepended to the resulting line.
    Can anybody give a solution to these issues?
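
    One way around all three issues at once is to skip string.Split() and scan for the delimiter manually, keeping the delimiter attached to each line: nothing has to be added back afterwards, and no empty or null entries appear. A sketch (handling of a line that straddles two 1024-byte chunks is left to the caller, as the comments note):

        // Splits a chunk on an arbitrary delimiter, keeping the delimiter with
        // each line so the output round-trips byte for byte.
        using System;
        using System.Collections.Generic;

        static class ChunkSplitter
        {
            public static List<string> SplitKeepingDelimiter(string chunk, string delimiter)
            {
                var lines = new List<string>();
                int start = 0, hit;
                while ((hit = chunk.IndexOf(delimiter, start, StringComparison.Ordinal)) >= 0)
                {
                    // include the delimiter itself in the returned line
                    lines.Add(chunk.Substring(start, hit - start + delimiter.Length));
                    start = hit + delimiter.Length;
                }
                if (start < chunk.Length)
                    lines.Add(chunk.Substring(start)); // partial line: carry over to the next chunk
                return lines;
            }
        }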


  • isalpha(<mychar>) == true evaluates to false??

    - by Buttink
    string temp is equal to "ZERO:\t.WORD\t1" in my debugger (the first line of my file).

        string temp = RemoveWhiteSpace(data);
        int i = 0;
        if ( temp.length() > 0 && isalpha(temp[0]) )
            cout << "without true worked" << endl;
        if ( temp.length() > 0 && isalpha(temp[0]) == true )
            cout << "with true worked" << endl;

    This is my code to check whether the first character of temp is a-z or A-Z. The first if statement evaluates to true and the second to false. WHY?!?!?! I have tried this even without the "temp.length() > 0 &&" and it still evaluates to false. It just hates the "== true". The only thing I can think of is that isalpha() returns != 0 while true == 1; then you could get isalpha() == 2, which != 1. But I have no idea if C++ is that ... weird. BTW, I don't need to be told that the "== true" is logically pointless. I know. The output was:

        without true worked

    Compiled with Code::Blocks using GNU GCC on Ubuntu 9.10 (if that matters any).
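
    The guess at the end is exactly right: the standard only promises that isalpha() returns non-zero for alphabetic characters, and comparing against true compares against the int 1. A small demonstration (the 1024 is what glibc happens to return; other C libraries differ):

        // why `isalpha(c) == true` fails: non-zero is not necessarily 1
        #include <cctype>
        #include <iostream>

        int main()
        {
            char c = 'Z';
            int r = std::isalpha(static_cast<unsigned char>(c)); // cast avoids UB for negative char values
            std::cout << "isalpha('Z') returned " << r << "\n";  // e.g. 1024 with glibc
            std::cout << (r == true ? "r == true\n" : "non-zero, but not 1\n");
            std::cout << (r != 0 ? "r != 0 passes\n" : "r != 0 fails\n");
            return 0;
        }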


  • How to move image in Applet?

    - by user1609804
    I want to move the character left, right, up, and down in an applet, but it is not moving at all. Here is my code; help please.

        import javax.swing.JPanel;
        import java.awt.image.BufferedImage;
        import java.io.*;
        import javax.imageio.ImageIO;
        import java.applet.*;
        import java.awt.event.*;
        import java.awt.*;

        public class drawCenter extends Applet {
            private int x, y; // the x and y of the position of the player
            private BufferedImage image, pos;

            public void init() {
                try {
                    image = ImageIO.read(new File("pokemonCenter.png"));
                    pos = ImageIO.read(new File("player/maleInGame.png"));
                } catch (IOException ex) { }
                x = 150;
                y = 171;
            }

            public void keyPressed(KeyEvent e) {
                int keyCode = e.getKeyCode();
                switch (keyCode) {
                    case KeyEvent.VK_UP:    if (y > 0)   { y = y - 19; repaint(); } break;
                    case KeyEvent.VK_DOWN:  if (y < 171) { y = y + 19; repaint(); } break;
                    case KeyEvent.VK_LEFT:  if (x > 0)   { x = x - 15; repaint(); } break;
                    case KeyEvent.VK_RIGHT: if (x < 285) { x = x + 15; repaint(); } break;
                }
                e.consume();
            }

            public void keyReleased() { }

            public void paint(Graphics g) {
                g.drawImage(image, 0, 0, null);
                g.drawImage(pos, x, y, null);
            }
        }
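
    The likely reason nothing moves: the applet never registers a key listener, so keyPressed() is never called. The class doesn't implement KeyListener, and keyReleased() without a KeyEvent parameter doesn't match the interface either. A sketch of the missing wiring (movement logic unchanged from the question):

        public class drawCenter extends Applet implements KeyListener {
            public void init() {
                // ... load images and set x/y as before ...
                addKeyListener(this);  // without this, keyPressed() is dead code
                setFocusable(true);
                requestFocus();        // key events go only to the focused component
            }

            public void keyPressed(KeyEvent e)  { /* switch on e.getKeyCode() as before */ }
            public void keyReleased(KeyEvent e) { } // KeyListener requires this exact signature
            public void keyTyped(KeyEvent e)    { }
        }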


  • Truncating a file while it's being used (Linux)

    - by Hobo
    I have a process that's writing a lot of data to stdout, which I'm redirecting to a log file. I'd like to limit the size of the file by occasionally copying the current file to a new name and truncating it. My usual techniques of truncating a file, like cp /dev/null file, don't work, presumably because the process is using it. Is there some way I can truncate the file? Or delete it and somehow associate the process's stdout with a new file? FWIW, it's a third-party product that I can't modify to change its logging model.

    EDIT: redirecting over the file seems to have the same issue as the copy above; the file returns to its previous size the next time it's written to:

        ls -l sample.log ; echo > sample.log ; ls -l sample.log ; sleep 10 ; ls -l sample.log
        -rw-rw-r-- 1 user group 1291999 Jun 11 2009 sample.log
        -rw-rw-r-- 1 user group       1 Jun 11 2009 sample.log
        -rw-rw-r-- 1 user group 1292311 Jun 11 2009 sample.log
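
    The size "coming back" is the writer's saved file offset at work: the program opened the log without O_APPEND, so after truncation its next write lands at the old offset and the file becomes sparse up to that point. Two sketches of the usual cures (paths and names are placeholders):

        # Cure 1: relaunch with append-mode redirection; then in-place
        # truncation sticks, even while the app keeps writing.
        third_party_app >> /var/log/app.log 2>&1 &
        : > /var/log/app.log

        # Cure 2: a logrotate stanza (e.g. /etc/logrotate.d/app) that copies
        # the log aside and then truncates the original in place:
        /var/log/app.log {
            size 100M
            rotate 5
            copytruncate
            compress
            missingok
        }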


  • Cacti: "An internal Net-Snmp error condition detected in Cacti snmp_count"

    - by Recc
    There's the odd forum topic about errors similarly obscure to this one, but I haven't seen any for snmp_count in particular. I don't see graphing problems either, though I can't simply go and eyeball all the graphs. However, the poller does time out and has to be stopped by its internal overrun prevention. If I filter the flood of this error out of the log, I don't get anything else except the poller timeout:

        06/12/2014 12:48:00 PM - POLLER: Poller[0] Maximum runtime of 58 seconds exceeded. Exiting.
        06/12/2014 12:48:00 PM - SYSTEM STATS: Time:58.8566 Method:spine Processes:1 Threads:40 Hosts:1923 HostsPerProcess:1923 DataSources:61584 RRDsProcessed:0
        06/12/2014 12:48:00 PM - SPINE: Poller[0] ERROR: Spine Timed Out While Processing Hosts Internal

    Among the running processes I saw /usr/local/spine/spine 0 2053, which is always left behind. When I kill it, the flooding of the error stops. Of course it resumes on the next poll run as it goes through the devices. 2053 is apparently the DB ID for a device. I deleted that device completely to see if that would stop it. It doesn't; instead 2052 turns up there. I suspect it would be the same if I kept deleting devices, which I will not do. This started happening midday, when I wasn't doing anything to the Cacti server. I have tried reducing Maximum Threads per Process to 1 and Number of PHP Script Servers to 1; I've been running 10 script servers / 40 threads for months with a poll cycle time of about 20 seconds. I just found out that running snmpwalk against any host begins returning values but then times out halfway through. This doesn't happen from other servers on the network, which still suggests the problem is local to the Cacti machine. Any suggestions? For one polling cycle I switched to cmd.php instead; then I started getting errors like

        CMDPHP: Poller[0] Host[45] DS[541] WARNING: Result from SNMP not valid. Partial Result: U

    Perhaps as expected. Looking closely, I see that every snmpwalk I run is interrupted at the same place, as if some byte limit is hit and the connection is torn down.


  • DHCP "no answer" on CentOS 6.4

    - by Kev
    I installed a DHCP server (yum install dhcp) and this is my conf:

        # create new
        # specify domain name
        option domain-name "mydomain.name";
        # specify DNS's hostname or IP address
        option domain-name-servers 10.0.1.1, 10.0.1.2;
        option ntp-servers 10.0.1.1, 10.0.1.2;
        allow unknown-clients;
        # default lease time
        default-lease-time 2628000;
        # max lease time
        max-lease-time 2628000; # about a month
        # this DHCP server to be declared valid
        authoritative;
        # specify network address and subnet mask
        subnet 10.0.0.0 netmask 255.0.0.0 {
            # specify the range of lease IP addresses
            range dynamic-bootp 10.0.2.1 10.0.2.50;
            # specify broadcast address
            option broadcast-address 10.255.255.255;
            # specify default gateway
            option routers 10.0.0.1;
            allow unknown-clients;
        }

    service dhcp start reports [ OK ]. Yet, if I disable my other DHCP server (Win2k3) and get a client to try renewing its IP lease, it times out. So I installed dhcping. No matter what options I try, including directing dhcping at my server, adding a client address in the range, and adding my hardware address, it replies 'no answer'. I'm also trying -i, since that seems to be more akin to what a WinXP client would try to do, based on /var/log/messages. It logs the attempts (from dhcping here) as:

        Oct 24 18:55:13 newdc dhcpd: DHCPINFORM from 10.0.2.15 via eth0:4
        Oct 24 18:55:13 newdc dhcpd: DHCPACK to 10.0.2.15 (00:11:25:66:4e:7f) via eth0:4
        Oct 24 18:55:13 newdc dhcpd: DHCPINFORM from 10.0.2.15 via eth0:3
        Oct 24 18:55:13 newdc dhcpd: DHCPACK to 10.0.2.15 (00:11:25:66:4e:7f) via eth0:3
        Oct 24 18:55:13 newdc dhcpd: DHCPINFORM from 10.0.2.15 via eth0
        Oct 24 18:55:13 newdc dhcpd: DHCPACK to 10.0.2.15 (00:11:25:66:4e:7f) via eth0

    The :3 and :4 are because I have a few extra Host(A) records for this server, so it responds on more than one IP for our intranet app. No answer? It sounds like it should be getting three answers, no? (And if that's the problem, how do I limit the DHCP service to replying from eth0?)
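
    On that last question: on CentOS the dhcpd init script reads /etc/sysconfig/dhcpd, and listing an interface there restricts the daemon to answering on it. A sketch (eth0 as the chosen interface is an assumption; note also that dhcpd only serves interfaces whose address falls inside a declared subnet, so eth0's IP must sit in 10.0.0.0/8):

        # /etc/sysconfig/dhcpd
        DHCPDARGS="eth0"

        # then restart and watch the handshake while a client renews:
        service dhcpd restart
        tail -f /var/log/messages | grep -i dhcp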


  • free open-source linux screenshot & ocr tool

    - by Gryllida
    I'm looking for a tool which can capture a screen region, pass it to OCR, and put the result into the clipboard.

        import ppm:- | gocr -i - | xclip -selection c

    works, but gocr is unreliable: simple text on a webpage comes out with errors. It is a clear font, but the OCR tool always misses "r" and replaces it with an underscore.

        import ppm:- | ocrad -i - | xclip -selection c

    says 'ocrad: maxval 255 in ppm "P6" file.' tesseract needs an image file and does not accept piped input. xfce4-screenshooter does not do OCR. ABBYY Screenshot Reader is proprietary. tessnet2 is freeware running on a proprietary platform. Google Docs can OCR screenshots in a batch, but my data is confidential and better not put online. Graphical-interface solutions would be acceptable for this question, too. There are a number of existing SuperUser questions about OCR. They fall into several categories.

    Questions just about OCR, without the "screenshot taking" part:
    - Open Source OCR for linux
    - Free OCR for Arabic text
    - Looking for recommendations on OCR problem - tabular numeric data
    - Which has better OCR applications: Ubuntu, or Mac/iPad, or Windows?
    - How can I perform OCR from the command line?
    - OCR solution on linux machine from command line (duplicate)
    - Free OCR software
    - OCR for Sanskrit (OR devanagari)
    - Copy image and paste to OCR (windows)

    File processing OCR instead of screenshots:
    - Online OCR website for processing an entire pdf file at one time?
    - Practical OCR solution for converting a large book to a digital format?
    - How to extract text with OCR from a PDF on Linux?
    - Batch-OCR many PDFs
    - OCR Image based PDF
    - Copy image and paste to OCR
    - Extract OCR text from Evernote
    - OCR in Word 2013
    - Replace (OCR) garbled text in PDF?

    Processing files prior to running OCR:
    - How can I make OCR recognize my documents' text better?
    - Tesseract OCR recognition bilingual document. mistakes tolerance level setup
    - OCR for low quality images
    - How do I get the best quality screenshot for OCR (Optical Character Recognition) and what tool would be the best for screenshots?

    OCR training:
    - Training Tesseract-OCR for english language fonts

    None of them answer this question.
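
    A workable sketch that dodges the tesseract piping limitation with a temporary file (ImageMagick's import lets you rubber-band a region interactively; the file names are generated, everything else is standard tool behavior):

        #!/bin/sh
        tmp=$(mktemp -u /tmp/ocr-XXXXXX)
        import "$tmp.png"                            # click-drag to select the region
        tesseract "$tmp.png" "$tmp" >/dev/null 2>&1  # writes recognized text to $tmp.txt
        xclip -selection c < "$tmp.txt"
        rm -f "$tmp.png" "$tmp.txt"

    Bound to a hotkey, this behaves like the one-liners above, and tesseract's accuracy on clean screen fonts is typically better than gocr's.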


  • ubuntu bind9 AppArmor read permission denied (chroot jail)

    - by Richard Whitman
    I am trying to run bind9 in a chroot jail. I followed the steps at http://www.howtoforge.com/debian_bind9_master_slave_system and I am getting the following errors in my syslog:

        Jul 27 16:53:49 conf002 named[3988]: starting BIND 9.7.3 -u bind -t /var/lib/named
        Jul 27 16:53:49 conf002 named[3988]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jul 27 16:53:49 conf002 named[3988]: adjusted limit on open files from 4096 to 1048576
        Jul 27 16:53:49 conf002 named[3988]: found 4 CPUs, using 4 worker threads
        Jul 27 16:53:49 conf002 named[3988]: using up to 4096 sockets
        Jul 27 16:53:49 conf002 named[3988]: loading configuration from '/etc/bind/named.conf'
        Jul 27 16:53:49 conf002 named[3988]: none:0: open: /etc/bind/named.conf: permission denied
        Jul 27 16:53:49 conf002 named[3988]: loading configuration: permission denied
        Jul 27 16:53:49 conf002 named[3988]: exiting (due to fatal error)
        Jul 27 16:53:49 conf002 kernel: [74323.514875] type=1400 audit(1343433229.352:108): apparmor="DENIED" operation="open" parent=3987 profile="/usr/sbin/named" name="/var/lib/named/etc/bind/named.conf" pid=3992 comm="named" requested_mask="r" denied_mask="r" fsuid=103 ouid=103

    It looks like the process cannot read the file /var/lib/named/etc/bind/named.conf. I have made sure that the file is owned by user bind and that bind has read access to it:

        root@test:/var/lib/named/etc/bind# ls -atl
        total 64
        drwxr-xr-x 3 bind bind 4096 2012-07-27 16:35 ..
        drwxrwsrwx 2 bind bind 4096 2012-07-27 15:26 zones
        drwxr-sr-x 3 bind bind 4096 2012-07-26 21:36 .
        -rw-r--r-- 1 bind bind  666 2012-07-26 21:33 named.conf.options
        -rw-r--r-- 1 bind bind  514 2012-07-26 21:18 named.conf.local
        -rw-r----- 1 bind bind   77 2012-07-25 00:25 rndc.key
        -rw-r--r-- 1 bind bind 2544 2011-07-14 06:31 bind.keys
        -rw-r--r-- 1 bind bind  237 2011-07-14 06:31 db.0
        -rw-r--r-- 1 bind bind  271 2011-07-14 06:31 db.127
        -rw-r--r-- 1 bind bind  237 2011-07-14 06:31 db.255
        -rw-r--r-- 1 bind bind  353 2011-07-14 06:31 db.empty
        -rw-r--r-- 1 bind bind  270 2011-07-14 06:31 db.local
        -rw-r--r-- 1 bind bind 2994 2011-07-14 06:31 db.root
        -rw-r--r-- 1 bind bind  463 2011-07-14 06:31 named.conf
        -rw-r--r-- 1 bind bind  490 2011-07-14 06:31 named.conf.default-zones
        -rw-r--r-- 1 bind bind 1317 2011-07-14 06:31 zones.rfc1918

    What could be wrong here?
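
    The kernel line gives it away: apparmor="DENIED" means the block is AppArmor, not file permissions; the stock named profile doesn't cover the chroot path. A sketch of the usual fix (the local override file is standard on recent Ubuntu releases; if it's absent, the same two lines go inside the braces of /etc/apparmor.d/usr.sbin.named):

        # allow named to read/write under the chroot
        cat >> /etc/apparmor.d/local/usr.sbin.named <<'EOF'
        /var/lib/named/ r,
        /var/lib/named/** rw,
        EOF

        apparmor_parser -r /etc/apparmor.d/usr.sbin.named
        service bind9 restart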


  • Oracle 9i Session Disconnections

    - by mlaverd
    I am in a development environment, and our test Oracle 9i server has been misbehaving for a few days now. What happens is that our JDBC connections start disconnecting after a few successful connections. We got this box set up by our IT department and handed over to us. It is 'our problem', so options like 'ask your DBA' aren't going to help me. :( Our server is set up with 3 plain databases (one is the main dev DB, another is the 'experimental' dev DB). We use the Oracle 10 ojdbc14.jar thin JDBC driver (because of some bug in the version 9 driver). We're using Hibernate to talk to the DB. The only thing I can see that changed is that we now have more users connecting to the server: instead of one developer, we now have 3. With the Hibernate connection pools, I'm thinking that maybe we're hitting some limit? Anyone have any idea what's going on? Here's the stack trace on the client:

        Caused by: org.hibernate.exception.GenericJDBCException: could not execute query
            at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126) [hibernate3.jar:na]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114) [hibernate3.jar:na]
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doList(Loader.java:2235) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2129) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.list(Loader.java:2124) [hibernate3.jar:na]
            at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401) [hibernate3.jar:na]
            at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:363) [hibernate3.jar:na]
            at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196) [hibernate3.jar:na]
            at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1149) [hibernate3.jar:na]
            at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102) [hibernate3.jar:na]
            ...
        Caused by: java.sql.SQLException: Io exception: Connection reset
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:829) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1049) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1154) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3415) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.getResultSet(Loader.java:1812) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doQuery(Loader.java:697) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doList(Loader.java:2232) [hibernate3.jar:na]
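
    'Io exception: Connection reset' on checkout is a classic signature of pooled connections being killed while idle, by an Oracle profile (IDLE_TIME) or by a firewall quietly dropping long-quiet sessions. If that's the cause here, validating connections before use usually papers over it. A sketch for Hibernate 3 with c3p0 (pool sizes are placeholders):

        <!-- hibernate.cfg.xml fragment: use c3p0 and evict dead connections -->
        <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
        <property name="hibernate.c3p0.min_size">5</property>
        <property name="hibernate.c3p0.max_size">20</property>
        <property name="hibernate.c3p0.timeout">300</property>          <!-- drop idle connections after 5 min -->
        <property name="hibernate.c3p0.idle_test_period">60</property>  <!-- ping idle connections every minute -->

    A preferredTestQuery such as SELECT 1 FROM DUAL can also be set in c3p0.properties if the idle test alone doesn't catch the resets. It is also worth checking the server-side session limits (the SESSIONS and PROCESSES init parameters), given the jump from one developer to three.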


  • dovecot imap ssl certificate issues

    - by mulllhausen
    I have been trying to configure my dovecot IMAP server (version 1.0.10 - upgrading is not an option at this stage) with a new SSL certificate on Ubuntu, like so:

        $ grep ^ssl /etc/dovecot/dovecot.conf
        ssl_disable = no
        ssl_cert_file = /etc/ssl/certs/mydomain.com.crt.20120904
        ssl_key_file = /etc/ssl/private/mydomain.com.key.20120904
        $ /etc/init.d/dovecot stop
        $ sudo dovecot -p
        $ [i enter the ssl password here]

    It doesn't show any errors, and when I run ps aux | grep dovecot I get:

        root     21368  0.0  0.0  12452  688 ?     Ss   15:19   0:00 dovecot -p
        root     21369  0.0  0.0  71772 2940 ?     S    15:19   0:00 dovecot-auth
        dovecot  21370  0.0  0.0  14140 1904 ?     S    15:19   0:00 pop3-login
        dovecot  21371  0.0  0.0  14140 1900 ?     S    15:19   0:00 pop3-login
        dovecot  21372  0.0  0.0  14140 1904 ?     S    15:19   0:00 pop3-login
        dovecot  21381  0.0  0.0  14280 2140 ?     S    15:19   0:00 imap-login
        dovecot  21497  0.0  0.0  14280 2116 ?     S    15:29   0:00 imap-login
        dovecot  21791  0.0  0.0  14148 1908 ?     S    15:48   0:00 imap-login
        dovecot  21835  0.0  0.0  14148 1908 ?     S    15:53   0:00 imap-login
        dovecot  21931  0.0  0.0  14148 1904 ?     S    16:00   0:00 imap-login
        me       21953  0.0  0.0   5168  944 pts/0 S+   16:02   0:00 grep --color=auto dovecot

    which looks like it is all running fine. So then I test whether I can telnet to the dovecot server, and this works fine:

        $ telnet localhost 143
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        * OK Dovecot ready.

    But when I test whether dovecot has configured the SSL certificates properly, it appears to fail:

        $ sudo openssl s_client -connect localhost:143 -starttls imap
        CONNECTED(00000003)
        depth=0 /description=xxxxxxxxxxxxxxxxx/C=AU/ST=xxxxxxxx/L=xxxx/O=xxxxxx/CN=*.mydomain.com/[email protected]
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=0 /description=xxxxxxxxxxx/C=AU/ST=xxxxxx/L=xxxx/O=xxxx/CN=*.mydomain.com/[email protected]
        verify error:num=27:certificate not trusted
        verify return:1
        depth=0 /description=xxxxxxxx/C=AU/ST=xxxxxxxxxx/L=xxxx/O=xxxxx/CN=*.mydomain.com/[email protected]
        verify error:num=21:unable to verify the first certificate
        verify return:1
        ---
        Certificate chain
         0 s:/description=xxxxxxxxxxxx/C=AU/ST=xxxxxxxxxx/L=xxxxxxxx/O=xxxxxxx/CN=*.mydomain.com/[email protected]
           i:/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Class 2 Primary Intermediate Server CA
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        xxxxxxxxxxxxxxxxxxxx
        xxxxxxxxxxxxxxxxxxxx
        . . .
        xxxxxxxxxxxxxxxxxxxx
        xxxxxxxxxxxxxxxxxx==
        -----END CERTIFICATE-----
        subject=/description=xxxxxxxxxx/C=AU/ST=xxxxxxxxx/L=xxxxxxx/O=xxxxxx/CN=*.mydomain.com/[email protected]
        issuer=/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Class 2 Primary Intermediate Server CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 2831 bytes and written 342 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 2048 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: xxxxxxxxxxxxxxxxxxxx
            Session-ID-ctx:
            Master-Key: xxxxxxxxxxxxxxxxxx
            Key-Arg   : None
            Start Time: 1351661960
            Timeout   : 300 (sec)
            Verify return code: 21 (unable to verify the first certificate)
        ---
        . OK Capability completed.

    At least, I'm assuming this is a failure???
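
    Verify return code 21 ordinarily means the server sent only the leaf certificate and no intermediate: the chain above shows the leaf issued by "StartCom Class 2 Primary Intermediate Server CA", but that CA certificate itself is never presented. Dovecot serves every certificate found in ssl_cert_file, so appending the StartCom intermediate PEM after the leaf is the usual fix (file names are placeholders; the intermediate comes from StartCom):

        cat /etc/ssl/certs/mydomain.com.crt.20120904 sub.class2.server.ca.pem \
            > /etc/ssl/certs/mydomain.com.chained.crt.20120904

        # dovecot.conf:
        #   ssl_cert_file = /etc/ssl/certs/mydomain.com.chained.crt.20120904

        /etc/init.d/dovecot restart

    After the restart, the same openssl s_client test should show two entries in the certificate chain, and the verify return code should drop to 0 when the system root store is consulted.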


  • XenServer 6.0.2 path to installation media contains non-ascii characters

    - by cmaduro
    XenServer 6.0.2 installation fails no matter what I do. I have confirmed that the MD5 checksum on my ISO file is good. I tried installing from a mounted ISO file (remotely via iKVM), from physical media, and from a bootable USB stick (using syslinux + the contents of the ISO). All attempts yield the same result: when verifying the installation media, at 0% initializing, the following is reported: "Some packages appeared to be damaged.", followed by a list of pretty much all the gz2 and rpm packages. If I skip the media verification, the installer proceeds and then gives me an error when it reaches "Installing from base pack" at 0%, which states:

        An unrecoverable error has occurred. The error was:
        'ascii' codec can't decode byte 0xff in position 20710: ordinal not in range(128)
        Please refer to your user guide, or contact a Technical Support Representative, for further details

    The only option left is to reboot. Apparently, at some point while the installer processes the repositories on the installation media, non-ASCII characters are found, which causes the installer to quit. How do I fix this? Here are my specs:

    - TYAN S8236 motherboard
    - 2 AMD Opteron 6234 processors
    - LSI2008 card connected to 2 1TB Seagate Constellation SATA drives, 1 500GB Corsair m4 SSD (SATA) and 1 Corsair Force 3 64GB SSD (SATA)
    - Onboard SATA connected to a slim DVD-+RW
    - Onboard SAS connected to 2 IBM ESX 70GB 10K SAS drives (for XenServer)
    - 256GB memory

    Comments:
    - According to pylonsbook.com, "chances are you have run into a problem with character sets, encodings, and Unicode" – cmaduro
    - A clue is provided by vmware.com/support/vsphere5/doc/…: "Data migration fails if the path to the vCenter Server installation media contains non-ASCII characters. When this problem occurs, an error message similar to: 'ascii' codec can't decode byte 0xd0 in position 30: ordinal not in range(128) appears, and the installer quits unexpectedly during the data migration process." – cmaduro
    - The .py extension of the file you have to edit in this link community.spiceworks.com/how_to/show/1168 means the installer is written in Python. Python is interpreted, so now to find the install file responsible for this error. – cmaduro
    - The file that generates the error upon verification is /opt/xensource/installer/tui/repo.py. The error message appears around line 359. – cmaduro
    - I am fairly sure that the install error is generated somewhere in repository.py, as backend.py throws errors while methods in that file are being called. Perhaps all errors can be traced back to this file. – cmaduro
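
    The comments pin down the mechanism: the installer runs under Python 2, which decodes byte strings with the ascii codec by default, so any byte >= 0x80 in package metadata or a path aborts the decode. A tiny reproduction of the failure class (illustrative only, not the installer's actual code path):

        # Python 2
        data = 'repo-name-\xff'    # byte string containing one non-ASCII byte
        data.decode('ascii')       # UnicodeDecodeError: 'ascii' codec can't decode
                                   # byte 0xff in position 10: ordinal not in range(128)
        data.decode('latin-1')     # an 8-bit codec never fails on such bytes

    Which is why the practical workarounds are either rebuilding the media so no non-ASCII bytes appear in the repository paths, or patching the repo.py named above to decode with an 8-bit codec such as latin-1.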


  • Squid 2.7.STABLE3-4.1 as a transparent proxy on Ubuntu Server 9.04

    - by E3 Group
    Can't get this to work at all! I'm trying to get this Linux box to act as a transparent proxy and, with the help of DHCP, force everyone on the network to gate through the proxy. I have two ethernet connections, both to the same switch, and I'm trying to get 192.168.1.234 to become the default gateway. The actual WAN connection is to a gateway at 192.168.1.1.

        eth0 is 192.168.1.234
        eth1 is 192.168.1.2

    Effectively I'm trying to make eth0 a LAN-only interface and eth1 a WAN interface. I've only set eth0 to have a gateway address in /etc/network/interfaces; I'm not sure whether I should set the gateway for eth1 to point to 192.168.1.234. My squid.conf file has the following directives added at the bottom:

        http_port 3128 transparent
        acl lan src 192.168.1.0/24
        acl lh src 127.0.0.1/255.255.255.0
        http_access allow lan
        http_access allow lh

    I've added the following routing commands:

        iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.2:3128
        iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

    I set a computer with TCP settings pointing at 192.168.1.234 as the gateway and opened up google.com, but it comes up with a request error. Any ideas why this isn't working? :( Been searching continuously for a solution, to no avail.

    EDIT: Managed to get it to route properly to squid; here's the error I get in the browser:

        ERROR
        The requested URL could not be retrieved

        While trying to process the request:
        GET / HTTP/1.1
        Host: www.google.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cache-Control: max-age=0

        The following error was encountered:
        * Invalid Request
        Some aspect of the HTTP Request is invalid. Possible problems:
        * Missing or unknown request method
        * Missing URL
        * Missing HTTP Identifier (HTTP/1.0)
        * Request is too large
        * Content-Length missing for POST or PUT requests
        * Illegal character in hostname; underscores are not allowed

        Your cache administrator is webmaster.
        Generated Mon, 26 Oct 2009 03:41:15 GMT by mjolnir.lloydharrington.local (squid/2.7.STABLE3)
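
    One thing worth flagging in the ruleset: DNAT to 192.168.1.2:3128 throws away the client's original destination address (squid is then left with only the Host header to reconstruct the request), and intercepting port 80 on the WAN-side eth1 also catches squid's own outbound fetches, looping them back into squid. A common single-box sketch instead intercepts only on the LAN interface and NATs outbound traffic (addresses from the question; flushing rules on a shared box deserves care):

        # wipe the old NAT rules
        iptables -t nat -F

        # intercept web traffic arriving from the LAN on eth0 only
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128

        # no interception on eth1, so squid's own fetches leave untouched;
        # forward and masquerade the clients' non-HTTP traffic
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE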


  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things to try and solve this that have been gleaned from a few websites, e.g.:

    - Making the first part of file names unique to aid 8.3 name generation
    - Writing files to multiple directories
    - Changing Intel Disk Write Caching
    - Windows File/Printer Sharing (Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications), under System > Advanced > Performance > Advanced
    - NtfsDisableLastAccessUpdate - use "fsutil behavior set disablelastaccess 1"
    - Disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart
    - Enable a large size file system cache
    - Disable paging of the kernel code
    - IO Page Lock Limit
    - Turn Off (or On) the Indexing Service

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason and a solution (programmatic or not)? We can reproduce the problem using IOMeter and a simple setup:

    1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
    2. Select the worker thread and put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: must have at least 1GB free space; sector size is 512 bytes).
    3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
    4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.

    Our machine isn't exactly slow (Xeon 8-core rack server with 4GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard

    ProcMon Result


  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kind of stuck with a problem I have been trying to wrap my head around for days now. Here is what I am doing: using Veewee, I create a VirtualBox image and then a Vagrant box from it (see here, here). Finally I run Puppet from Vagrant to install RabbitMQ (see here). Veewee, Vagrant and VirtualBox all run on MacOS X 10.7.4. The Vagrant box itself is CentOS 6.2. This worked fine for quite some time, until I recreated the VirtualBox image a couple of days ago. During installation of the RabbitMQ plugins in my Puppet run, I now get the following error:

        /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set

    My RabbitMQ Puppet configuration can be found in my GitHub repo for the project, but here is the most important part:

        $version = "2.8.7"
        $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm"

        package{"erlang":
            ensure => "present",
        }

        package{"rabbitmq-server":
            provider => "rpm",
            source   => $url,
            require  => Package["erlang"]
        }

        exec{"rabbitmq-plugins":
            path    => "/usr/bin:/usr/sbin:/bin",
            command => "rabbitmq-plugins enable rabbitmq_management",
            require => Package["rabbitmq-server"]
        }

    My additional repositories (e.g. epel) are defined in Veewee's postinstall.sh, right at the top of the file. Finally, this is what I get from '/etc/init.d/rabbitmq-server status':

        [{pid,2834},
         {running_applications,[{rabbit,"RabbitMQ","2.8.7"},
                                {ssl,"Erlang/OTP SSL application","4.1.6"},
                                {public_key,"Public key infrastructure","0.13"},
                                {crypto,"CRYPTO version 2","2.0.4"},
                                {mnesia,"MNESIA CXC 138 12","4.5"},
                                {os_mon,"CPO CXC 138 46","2.2.7"},
                                {sasl,"SASL CXC 138 11","2.1.10"},
                                {stdlib,"ERTS CXC 138 10","1.17.5"},
                                {kernel,"ERTS CXC 138 10","2.14.5"}]},
         {os,{unix,linux}},
         {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"},
         {memory,[{total,24993120},
                  {processes,10328496},
                  {processes_used,10321296},
                  {system,14664624},
                  {atom,1175905},
                  {atom_used,1143841},
                  {binary,17192},
                  {code,11416020},
                  {ets,766168}]},
         {vm_memory_high_watermark,0.4},
         {vm_memory_limit,205851852},
         {disk_free_limit,1000000000},
         {disk_free,7089795072},
         {file_descriptors,[{total_limit,924},
                            {total_used,4},
                            {sockets_limit,829},
                            {sockets_used,2}]},
         {processes,[{limit,1048576},{used,131}]},
         {run_queue,0},
         {uptime,6}]

    Sources on the web suggest that I have to set HOME. Of course it was set when I logged into the box: for user vagrant it was '/home/vagrant' and for root it was '/root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
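
    The error fits how Puppet runs execs: with a scrubbed environment, so HOME is unset even though it is present in every login shell, and erlexec (which rabbitmq-plugins invokes) refuses to start. The usual workaround is to set HOME explicitly on the exec; a sketch (the value is an assumption, /var/lib/rabbitmq being rabbitmq's home directory on RPM installs):

        exec{"rabbitmq-plugins":
            path        => "/usr/bin:/usr/sbin:/bin",
            environment => ["HOME=/var/lib/rabbitmq"], # erlexec needs HOME; execs inherit no login env
            command     => "rabbitmq-plugins enable rabbitmq_management",
            require     => Package["rabbitmq-server"]
        }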


  • Nginx and PHP-FPM running out of connections

    - by E3pO
    I keep running into errors like these, [02-Jun-2012 01:52:04] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 19 idle, and 49 total children [02-Jun-2012 01:52:05] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 19 idle, and 50 total children [02-Jun-2012 01:52:06] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 19 idle, and 51 total children [02-Jun-2012 03:10:51] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 18 idle, and 91 total children I changed my settings for php-fpm to these, pm.max_children = 150 (It was at 100, i got a max_children reached and upped to 150) pm.start_servers = 75 pm.min_spare_servers = 20 pm.max_spare_servers = 150 Resulting in [02-Jun-2012 01:39:19] WARNING: [pool www] server reached pm.max_children setting (150), consider raising it I've just launched a new website that is getting a conciderable amount of traffic on it. This traffic is legitimate and users are getting 504 gateway timeouts when the limit is reached. I have limited connections to my server with IPTABLES and I'm running fail2ban and keeping track of nginx access logs. The traffic is all legitimate, i'm just running out of room for users. I'm currently running on a dual core box with ubuntu 64bit. free total used free shared buffers cached Mem: 6114284 5726984 387300 0 141612 4985384 -/+ buffers/cache: 599988 5514296 Swap: 524284 5804 518480 My php.ini max_input_time = 60 My nginx config is worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 19000; # multi_accept on; } worker_rlimit_nofile 20000; #each connection needs a filehandle (or 2 if you are proxying) client_max_body_size 30M; client_body_timeout 10; client_header_timeout 10; keepalive_timeout 5 5; send_timeout 10; location ~ \.php$ { try_files $uri /er/error.php; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; fastcgi_intercept_errors on; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } What can I do to stop running out of connections? Why does this keep occurring? I'm monitoring my traffic on Google Analytics realtime and when the user count gets above about 120 my php-fpm.log is full of these warnings..


  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984 61312 91968
        new values are: 183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache. The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad: the CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However, we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here:

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune for a high-traffic HAProxy instance?
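
    For reference, a sketch of persisting this kind of tuning so it survives reboots (the values are the ones discussed above, not recommendations):

        # /etc/sysctl.conf additions
        net.ipv4.tcp_mem = 183936 245248 367872
        net.ipv4.route.gc_elasticity = 4

        # apply immediately without a reboot
        sysctl -p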


  • MySQL daemon keeps terminating unexpectedly

    - by Yehia A.Salam
    The MySQL daemon on my CentOS server keeps crashing. I got the logs from /var/log/mysqld, but I am still not sure how to fix this:

        121114 16:22:56 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
        121114 21:55:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121114 21:55:11 [Note] Plugin 'FEDERATED' is disabled.
        121114 21:55:11 InnoDB: The InnoDB memory heap is disabled
        121114 21:55:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121114 21:55:11 InnoDB: Compressed tables use zlib 1.2.3
        121114 21:55:11 InnoDB: Using Linux native AIO
        121114 21:55:11 InnoDB: Initializing buffer pool, size = 128.0M
        121114 21:55:11 InnoDB: Completed initialization of buffer pool
        121114 21:55:11 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121114 21:55:11 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121114 21:55:12 InnoDB: Waiting for the background threads to start
        121114 21:55:13 InnoDB: 1.1.6 started; log sequence number 77177262
        121114 21:55:13 [Note] Event Scheduler: Loaded 0 events
        121114 21:55:13 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.5.12'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL) by Remi
        121115 00:19:44 mysqld_safe Number of processes running now: 0
        121115 00:19:44 mysqld_safe mysqld restarted
        121115  0:19:47 [Note] Plugin 'FEDERATED' is disabled.
        121115  0:19:47 InnoDB: The InnoDB memory heap is disabled
        121115  0:19:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121115  0:19:47 InnoDB: Compressed tables use zlib 1.2.3
        121115  0:19:47 InnoDB: Using Linux native AIO
        121115  0:19:47 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121115  0:19:47 InnoDB: Completed initialization of buffer pool
        121115  0:19:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121115  0:19:47 [ERROR] Plugin 'InnoDB' init function returned error.
        121115  0:19:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121115  0:19:47 [ERROR] Unknown/unsupported storage engine: InnoDB
        121115  0:19:47 [ERROR] Aborting

    Edit #1:

        free
                     total       used       free     shared    buffers     cached
        Mem:           496        370        126          0         24        110
        -/+ buffers/cache:        234        261
        Swap:         1023          9       1014

    Edit #2: Also, the largest table in my MySQL is 20MB, so the memory used should be pretty moderate.

        SELECT CONCAT(table_schema, '.', table_name),
               CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
               CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
               CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
               CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
               ROUND(index_length / data_length, 2) idxfrac
        FROM   information_schema.TABLES
        ORDER  BY data_length + index_length DESC
        LIMIT  10;
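
    The fatal line is the giveaway: mmap(137363456 bytes) failed with errno 12, which is ENOMEM; the 128 MB InnoDB buffer pool no longer fits in what is left of this 496 MB box when mysqld_safe respawns the daemon (the "Number of processes running now: 0" just before it is consistent with the kernel OOM killer taking mysqld down; /var/log/messages would confirm). On a VPS this size, shrinking the big buffers is the usual fix; a sketch, with values as assumptions to tune:

        # /etc/my.cnf fragment for a ~512 MB server
        [mysqld]
        innodb_buffer_pool_size = 64M  # largest table is ~20 MB, so the hot set still fits
        key_buffer_size         = 16M
        max_connections         = 50   # each connection carries per-thread buffers

    followed by service mysqld restart. Keeping swap enabled (there is 1 GB here) also softens the OOM edge.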


  • DNSBL listed at zen.spamhaus.org - cant get outgoing mail working? Am I interpreting the response correctly?

    - by Joe Hopfgartner
    I have a problem with a mail server, and there is something I kind of don't understand! I can connect, authenticate, and specify the sender address, but when specifying the receiver I get a 550 error which looks like this:

        RCPT TO:[email protected]
        550-DNSBL listed at zen.spamhaus.org
        550 http://www.spamhaus.org/query/bl?ip=62.178.15.161

    Now the strange thing is that 62.178.15.161 is my local client address, not the server's IP address. Also, error code 550 is defined as "550 Requested action not taken: mailbox unavailable". To me that makes no sense. Why this error code with this Spamhaus message? Why the local IP address and not the server's? Exim is running, and nothing turns up in the logs (mail.err, mail.info, mail.log, mail.warn in /var/log). I looked up both the server's and the client's IP addresses on blacklists: the client's address is listed on some (as expected), but the server is totally clean. Here is the complete telnet log from when I reproduced the error. Mail clients like Evolution and Thunderbird give me the same Spamhaus error message.

        joe@joe-desktop:~$ telnet mail.hunsynth.org 25
        Trying 193.164.132.42...
        Connected to mail.hunsynth.org.
        Escape character is '^]'.
        220 hunsynth.org ESMTP Exim 4.69 Sat, 01 Jan 2011 17:52:45 +0100
        HELP
        214-Commands supported:
        214 AUTH STARTTLS HELO EHLO MAIL RCPT DATA NOOP QUIT RSET HELP
        EHLO AUTH
        250-hunsynth.org Hello chello062178015161.6.11.univie.teleweb.at [62.178.15.161]
        250-SIZE 52428800
        250-PIPELINING
        250-AUTH PLAIN LOGIN CRAM-MD5
        250-STARTTLS
        250 HELP
        AUTH LOGIN
        334 VXNlcm5hbWU6
        dGVzdEBodW5zeW50aC5vcmc=
        334 UGFzc3dvcmQ6
        *****
        235 Authentication succeeded
        MAIL FROM:[email protected]
        250 OK
        RCPT TO:[email protected]
        550-DNSBL listed at zen.spamhaus.org
        550 http://www.spamhaus.org/query/bl?ip=62.178.15.161
        quit
        221 hunsynth.org closing connection
        Connection closed by foreign host.
        joe@joe-desktop:~$

    Update: I tried the same thing from my other server and could successfully send an email. So it really looks like the server checks whether the IP which establishes the connection is on some blacklist. This is theoretically a good thing, but shouldn't authentication on the server prevent that? I just think it would be absurd if I couldn't send email over my SMTP server from my dynamic ISP connection because the dynamic address is listed, although I have a clean server with login.
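
    That reading is right: the DNSBL check in Exim's RCPT ACL scores the connecting client's IP, and it runs regardless of authentication unless told otherwise. The conventional arrangement accepts authenticated senders ahead of the dnslists test, so only unauthenticated strangers are checked. A sketch of the relevant acl_check_rcpt fragment (the exact ACL layout varies by distro configuration):

        acl_check_rcpt:
          # authenticated clients (and trusted relays) skip the blacklist checks
          accept  authenticated = *
          accept  hosts         = +relay_from_hosts

          # only everyone else gets scored against Spamhaus
          deny    message       = DNSBL listed at $dnslist_domain\n$dnslist_text
                  dnslists      = zen.spamhaus.org

          # ... remainder of the ACL unchanged ...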


  • Windows 7 boot problem (with colorful blinking smilies)

    - by Ishmael Smyrnow
    I put my computer (Windows 7) to sleep, and a couple hours later tried to wake it back up, but the monitor wouldn't come back on. I did a hard reset (held the power button), but I still couldn't get the monitor to show anything. I plugged it into my laptop, and the monitor works fine. I then swapped out the video card with an older one I have. The monitor came on and started showing the boot process. However, shortly after the animated Windows 7 logo came up, the screen went blank, the machine made a weird beeping noise, and I saw the strangest thing ever: small, colorful blocks started to fill my screen and flash, as if something were loading. Inside those blocks were smilies (like the ASCII-character kind). This continued for about a minute; then the computer rebooted. It scared the sh!t out of me. I've never had a virus before, and I'm savvy enough to keep myself from one, but I'm wondering if that's what it was. I've been using computers for ages and have never seen anything quite like this. Has anyone ever seen something like this? I'm doing hardware diagnostics before trying to boot into Windows again. Hopefully I can figure this out, but I thought I would consult the SU community while I wait on the results.

    UPDATE: I did a memory diagnostic, which turned up nothing. I also booted into Safe Mode with no problem and scheduled a disk check on both of my drives (I dual-boot XP & 7). I was feeling good and tried putting my regular video card back in, but the monitor won't display anything with it. Also, even though the monitor displays nothing, the system sounds like it's booting up. However, I hear a clicking in one of my hard drives that isn't there with the older video card. Could this be a problem with my hard drive, video card, or PSU? PSU makes sense, except for the fact that I've been using the same setup for over a year, and the video card doesn't require its own power connector.


  • How to set 2 conditions / criterias for VLOOKUP / LOOKUP / etc in OpenOffice Calc (or Excel)

    - by MestreLion
    I have a spreadsheet that started as a silly aid for a game (Mafia Wars 2) but grew into a tricky spreadsheet question. In the game your character has 9 "slots" for weapons and armor, one for each "type": Light Weapon, Heavy Weapon, Body Armor, Head Armor, etc. So I made a list of all weapons and armor available in the game, one item per row. Example:

        SHOP         ITEM TYPE       ITEM NAME        ATK  DEF  PRICE   EQUIPPED?
        Marketplace  Weapon Light    Konrad Knife      16    5   5.500
        Marketplace  Weapon Light    Ice Queen         19    6   8.200
        Marketplace  Armor Body Up   Layered Polym      0   31   8.600
        Marketplace  Armor Body Up   Full Shield        7   42  17.650
        Marketplace  Weapon Heavy    Konrad Bullpup    53   25  24.500
        Marketplace  Weapon Heavy    Full Moon Blow    73   12  24.500   x
        Marketplace  Armor Body Low  Knee Pads         17   26  14.200   x
        Marketplace  Armor Body Low  Army Boots        15   55  24.500
        Bone Yard    Weapon Light    Bone Launcher     41    2   9.400   x
        Neon Strip   Vehicle Ground  Supercharged      41   34  24.500
        Dead End     Weapon Heavy    Sharp Sickle      21    5  24.500
        Dead End     Armor Body Low  Unholy Boots       5   36  15.000
        Dead End     Armor Head      Hockey Mask        5   18  15.900   x

    The last column marks the items I have already bought and equipped (marked with "x"). What I need is a formula that, for each "slot" (item type), returns the info for the item of that kind that I am using. That would be:

        ITEM TYPE       SHOP NAME    ITEM NAME        ATK  DEF  PRICE
        Weapon Light    Bone Yard    Bone Launcher     41    2   9.400
        Weapon Heavy    Marketplace  Full Moon Blow    73   12  24.500
        Weapon Special  --           --                --   --  --
        Armor Body Up   --           --                --   --  --
        Armor Body Low  Marketplace  Knee Pads         17   26  14.200
        Armor Head      Dead End     Hockey Mask        5   18  15.900
        Vehicle Ground  --           --                --   --  --
        Vehicle Water   --           --                --   --  --
        Vehicle Air     --           --                --   --  --

    The item types are fixed, so they can be hard-coded, one row per item type. So the first result line should return data from the row where the 2nd column is "Weapon Light" and the last column is "x". Basically I need a LOOKUP (or VLOOKUP, or anything else) that uses 2 criteria to find a given row: the item type and the "x" marker. Question is: HOW? I am using OpenOffice Calc 3.2.1, but since it shares so many functions with MS Excel, answers for Excel are also fine (as long as they only use regular formulas; no VBScript, Macros, VBA, etc.). Last but not least, suggestions/solutions for rearranging the data so this problem becomes easier to solve are also welcome. Thanks!
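
    The standard two-criteria lookup in Calc (and Excel) is an array formula over INDEX and MATCH: multiplying the two boolean tests yields 1 only on the row matching both. A sketch, assuming the item list sits on a sheet named Items in A2:G100 (A = shop, B = type, C = name, G = the "x" marker) and the summary's ITEM TYPE is in A2 of the results sheet:

        =INDEX(Items.$C$2:$C$100;
               MATCH(1; (Items.$B$2:$B$100=A2)*(Items.$G$2:$G$100="x"); 0))

    entered with Ctrl+Shift+Enter so it becomes an array formula (Calc uses ; as the argument separator; Excel uses , and A1-style sheet references). The same MATCH with INDEX over column A gives the shop, and over D/E/F the ATK/DEF/PRICE; wrapping each in IF(ISNA(...);"--";...) reproduces the "--" placeholders for slots with nothing equipped.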


  • DPM 2010 "Disk failed or disk not found"

    - by SysAdmin
    I have an HP ProLiant ML110 G5 server with Windows Server 2008 R2, dedicated solely to DPM 2010. This server has a hard limit of 8TB of storage, which has already been reached. I'm now stuck in a situation where my disk keeps failing with "Disk failed or disk not found" in Disk Management; only after I reboot the system does the disk come back up. Today I was running my monthly tape backup on a certain protection group, and the disk failed again while the tape job was running (so the job wasn't completed). This is the description of the error in the alerts: "The disk Disk 1 - Hitachi HDS722020ALA330 SCSI Disk Device cannot be detected or has stopped responding. All subsequent protection activities that use this disk will fail until the disk is brought back online. (ID 3120)". My backup system is becoming useless! I don't think this is a hardware issue (please correct me if I'm wrong), since the drive works fine for a certain period of time, though that period is becoming shorter and shorter. I basically have no more options for fixing this problem. I tried to fix every error that came up in the event viewer, with no luck (including one regarding a SQL 2008 compatibility issue). The disk keeps failing! Now I'm only trying to recover/migrate the data from the problem disk, but my issue is that I cannot add any drives to the server, since I've already installed the maximum storage capacity of 8TB. I thought of 2 simple options; please tell me what you think:

    1. Unplug the healthy storage-pool disk (disk0) from the machine and install a new one, in order to migrate the data with the DPM migration tool. Then remove the defective disk (disk1), put disk0 back, and run the synchronization/consistency check on all the groups to recreate replicas and recovery points.
    2. Run diskpart.exe and clean the disk (losing all data), hoping it will work after I re-sync all the protection groups.

    Neither solution is elegant, but I have no better options at the moment. Please, I need some help. Thanks for your time. Angelo



  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all, I have a strange issue with my virtualenv + gunicorn setup, but only when gunicorn is launched via supervisord. I do realize it may very well be an issue with my supervisord, and I would appreciate any feedback on a better place to ask for help. In a nutshell: when I run gunicorn from my user shell, inside my virtualenv, everything works flawlessly; I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at system startup, everything is OK too. But if I kill the gunicorn_django processes, or if I perform a supervisord restart, then once gunicorn_django has relaunched, every request is answered with a weird traceback:

        (...)
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in
          connection = connections[DEFAULT_DB_ALIAS]
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__
          backend = load_backend(db['ENGINE'])
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend
          raise ImproperlyConfigured(error_msg)
        TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend.
        Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'
        Error was: cannot import name utils

    Full stack available here: http://pastebin.com/BJ5tNQ2N

    I'm running Ubuntu/Maverick (up-to-date), Python 2.6.6, virtualenv 1.5.1, gunicorn 0.12.0, Django 1.2.5, and psycopg2 '2.4-beta2 (dt dec pq3 ext)'.

    gunicorn configuration:

        backlog = 2048
        bind = "127.0.0.1:8000"
        pidfile = "/tmp/gunicorn-hc.pid"
        daemon = True
        debug = True
        workers = 3
        logfile = "/home/hc/prod/log/gunicorn.log"
        loglevel = "info"

    supervisord configuration:

        [program:gunicorn]
        directory=/home/hc/prod/hc
        command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
        user=hc
        umask=022
        autostart=True
        autorestart=True
        redirect_stderr=True

    Any advice? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Thank you.
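
    One way to narrow down "works from my shell, breaks under supervisord" is to replay the exact command with a scrubbed environment; if the ImproperlyConfigured error reproduces, the difference is environmental rather than in the code. A sketch:

        # run what supervisord runs, as the same user, with an empty environment
        sudo -u hc env -i /home/hc/prod/venv/bin/gunicorn_django \
            -c /home/hc/prod/hc/gunicorn.conf.py

        # if that reproduces it, pin the missing variables in the stanza, e.g.:
        #   [program:gunicorn]
        #   environment=HOME="/home/hc",PATH="/home/hc/prod/venv/bin:/usr/bin:/bin"

    supervisord starts programs with a minimal, non-login environment, which is the usual source of this class of only-under-supervisord failure.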


  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 for the first time and like what I see, for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, and therefore generic, with standard tools and nothing bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, because we don't know what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known MAC-address directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to push up the files:

    - Would I need to manage hundreds of access keys and have each server customized to include its own (doable, but key management becomes nightmarish)?
    - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key: if one machine were breached, a malicious hack could potentially get access to the filestore key and start deleting for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here; perhaps it can be summarised thus: in a perfect world, I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipe dream, or feasible? TIA, Aitch
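
    For what it's worth, the per-device-key idea maps onto IAM policies: one bucket, one credential per device, each allowed only to put objects under its own prefix and nothing else. A breached box can then spam its own prefix but cannot read or delete anyone else's data. A sketch of such a policy (bucket name and prefix scheme are placeholders):

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "WriteOnlyToOwnPrefix",
              "Effect": "Allow",
              "Action": ["s3:PutObject"],
              "Resource": "arn:aws:s3:::sensor-harvest/${aws:username}/*"
            }
          ]
        }

    With the ${aws:username} policy variable, a single policy document attached to an IAM group covers every device user, so provisioning a new box reduces to creating one IAM user and dropping its key onto the machine.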

