Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.


  • MySQL is hogging my server resources

    - by Reacen
    Does anyone have any idea what can cause this weird behaviour, and how I might go about fixing it? This is all coming from MySQL only (both RAM and CPU usage), for about 10 minutes after I reboot my Java game server (which has a pool of 256 connections). There are not that many queries, and I think it may be more of a MySQL misconfiguration problem. My server: 3.20 GHz * 6 core / 24 GB RAM / 64-bit Windows Server 2003. My game server: a Java server with a 256-connection MySQL pool (MyISAM engine), about 500,000 accounts, 9 million rows of game items in the database, and about 3,000 connected players. About 15 minutes after the game server reboot, the server regains its stability: CPU usage drops down to 1% ~ 5% and memory to 6 GB. Here is a copy of my MySQL configuration; any advice about it will be appreciated. I really set it up almost at random.

      # Example MySQL config file for very large systems.
      #
      # This is for a large system with memory of 1G-2G where the system runs mainly
      # MySQL.
      #
      # You can copy this file to
      # /etc/my.cnf to set global options,
      # mysql-data-dir/my.cnf to set server-specific options (in this
      # installation this directory is C:\mysql\data) or
      # ~/.my.cnf to set user-specific options.
      #
      # In this file, you can use all long options that a program supports.
      # If you want to know which options a program supports, run the program
      # with the "--help" option.

      # The following options will be passed to all MySQL clients
      [client]
      #password       = your_password
      port            = 3306
      socket          = /tmp/mysql.sock

      # Here follows entries for some specific programs

      # The MySQL server
      [mysqld]
      #log=c:\mysql.log
      port            = 3306
      socket          = /tmp/mysql.sock
      skip-locking
      key_buffer_size = 2572M
      max_allowed_packet = 64M
      table_open_cache = 512
      sort_buffer_size = 128M
      read_buffer_size = 128M
      read_rnd_buffer_size = 128M
      myisam_sort_buffer_size = 500M
      thread_cache_size = 32
      query_cache_size = 1948M
      # Try number of CPU's*2 for thread_concurrency
      thread_concurrency = 12
      max_connections = 5000

      # Don't listen on a TCP/IP port at all. This can be a security enhancement,
      # if all processes that need to connect to mysqld run on the same host.
      # All interaction with mysqld must be made via Unix sockets or named pipes.
      # Note that using this option without enabling named pipes on Windows
      # (via the "enable-named-pipe" option) will render mysqld useless!
      #
      #skip-networking

      # Replication Master Server (default)
      # binary logging is required for replication
      log-bin=mysql-bin

      # required unique id between 1 and 2^32 - 1
      # defaults to 1 if master-host is not set
      # but will not function as a master if omitted
      server-id       = 1

      # Replication Slave (comment out master section to use this)
      #
      # To configure this host as a replication slave, you can choose between
      # two methods :
      #
      # 1) Use the CHANGE MASTER TO command (fully described in our manual) -
      #    the syntax is:
      #
      #    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
      #    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
      #
      #    where you replace <host>, <user>, <password> by quoted strings and
      #    <port> by the master's port number (3306 by default).
      #
      #    Example:
      #
      #    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
      #    MASTER_USER='joe', MASTER_PASSWORD='secret';
      #
      # OR
      #
      # 2) Set the variables below. However, in case you choose this method, then
      #    start replication for the first time (even unsuccessfully, for example
      #    if you mistyped the password in master-password and the slave fails to
      #    connect), the slave will create a master.info file, and any later
      #    change in this file to the variables' values below will be ignored and
      #    overridden by the content of the master.info file, unless you shutdown
      #    the slave server, delete master.info and restart the slave server.
      #    For that reason, you may want to leave the lines below untouched
      #    (commented) and instead use CHANGE MASTER TO (see above)
      #
      # required unique id between 2 and 2^32 - 1
      # (and different from the master)
      # defaults to 2 if master-host is set
      # but will not function as a slave if omitted
      #server-id       = 2
      #
      # The replication master for this slave - required
      #master-host     = <hostname>
      #
      # The username the slave will use for authentication when connecting
      # to the master - required
      #master-user     = <username>
      #
      # The password the slave will authenticate with when connecting to
      # the master - required
      #master-password = <password>
      #
      # The port the master is listening on.
      # optional - defaults to 3306
      #master-port     = <port>
      #
      # binary logging - not required for slaves, but recommended
      #log-bin=mysql-bin
      #
      # binary logging format - mixed recommended
      #binlog_format=mixed

      # Point the following paths to different dedicated disks
      #tmpdir          = /tmp/
      #log-update      = /path-to-dedicated-directory/hostname

      # Uncomment the following if you are using InnoDB tables
      #innodb_data_home_dir = C:\mysql\data/
      #innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
      #innodb_log_group_home_dir = C:\mysql\data/
      # You can set .._buffer_pool_size up to 50 - 80 %
      # of RAM but beware of setting memory usage too high
      #innodb_buffer_pool_size = 384M
      #innodb_additional_mem_pool_size = 20M
      # Set .._log_file_size to 25 % of buffer pool size
      #innodb_log_file_size = 100M
      #innodb_log_buffer_size = 8M
      #innodb_flush_log_at_trx_commit = 1
      #innodb_lock_wait_timeout = 50

      [mysqldump]
      quick
      max_allowed_packet = 64M

      [mysql]
      no-auto-rehash
      # Remove the next comment character if you are not familiar with SQL
      #safe-updates

      [myisamchk]
      key_buffer_size = 256M
      sort_buffer_size = 256M
      read_buffer = 8M
      write_buffer = 8M

      [mysqlhotcopy]
      interactive-timeout
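
    The numbers in that configuration are worth sanity-checking on their own. A rough worst-case memory estimate (a Python sketch; the figures come from the config above, and treating the sort/read/read_rnd buffers as per-session allocations follows MySQL's documented behaviour, though MySQL only allocates them on demand, so this is an upper bound, not idle usage):

      # Worst-case memory implied by the my.cnf above (values in MB).
      MB = 1024 * 1024

      global_buffers  = (2572 + 1948 + 500) * MB   # key_buffer + query_cache + myisam_sort_buffer
      per_session     = (128 + 128 + 128) * MB     # sort + read + read_rnd, allocatable per connection
      max_connections = 5000

      worst_case = global_buffers + per_session * max_connections
      print("global buffers: %.1f GiB" % (global_buffers / 2.0**30))   # ~4.9 GiB
      print("worst case:     %.2f TiB" % (worst_case / 2.0**40))       # ~1.84 TiB, vs 24 GB of RAM

    Even the global buffers alone (about 4.9 GiB of key_buffer_size plus query_cache_size plus myisam_sort_buffer_size) are very large for a box that should leave room for everything else, and the per-session worst case is far beyond physical RAM, which is consistent with the post-reboot thrashing described above.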


  • Intermittent Connection Issues to SQL Server 2008 R2 RTM

    - by Chandan Jha
    The problem I am facing is a very complex one, and in spite of trying to find a root cause, I am standing at the same place after 2 months with just bits and pieces of information. Here is the scenario: there is a Windows 2003 server which uses a system DSN ODBC connection. I looked into the driver properties and it is as follows:

      Name        Version          File
      SQL Server  2000.86.3959.00  SQLSRV32.DLL

    Now, this system DSN has been configured with TCP\IP in Network Libraries, and 'determine port dynamically' is checked. Now, let's come to the database destination. It is hosted on Windows 2008, running the 64-bit SQL 2008 R2 RTM release. I will give you an overview of the events that happen, and whatever troubleshooting I could perform. I get an email saying 'blah blah' failed, and the only message their application gets is 'cannot connect to database'. I go to the SQL Server logs and find the following information:

      Error: 18456, Severity: 14, State: 58.
      Login failed for user ''. Reason: An attempt to login using SQL authentication failed.
      Server is configured for Windows authentication only. [CLIENT: 10.0.0.xx]

    A quick search shows that this error may come when an SQL Server is configured for Windows authentication, but that's not the case here: we have mixed mode, and the connection issue is intermittent. This SQL Server is configured to run under a local system account, but since we use only SQL Server accounts to connect to it, there should not be any Kerberos errors. When I run a Profiler trace and look only at 'existing connections', I see a lot of them coming from my client server displaying the SQL user, but NO hostname is shown. The TextData field shows TCP\IP information along with some arithabort and ansi-null settings. Next, I tried looking into the connectivity ring buffer using the following:

      SELECT CAST(record AS XML)
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY'

    One sample output is:

      <ConnectivityTraceRecord>
        <RecordType>Error</RecordType>
        <RecordSource>Tds</RecordSource>
        <Spid>118</Spid>
        <SniConnectionId>5124905D-D1EC-460E-AD78-201050B78C67</SniConnectionId>
        <OSError>0</OSError>
        <SniConsumerError>18452</SniConsumerError>
        <SniProvider>7</SniProvider>
        <State>1</State>
        <RemoteHost>10.0.0.21</RemoteHost>
        <RemotePort>5008</RemotePort>
        <LocalHost>10.1.0.38</LocalHost>
        <LocalPort>1433</LocalPort>
        <RecordTime>6/6/2012 21:14:57.527</RecordTime>
        <TdsBuffersInformation>
          <TdsInputBufferError>0</TdsInputBufferError>
          <TdsOutputBufferError>0</TdsOutputBufferError>
          <TdsInputBufferBytes>120</TdsInputBufferBytes>
        </TdsBuffersInformation>
        <TdsDisconnectFlags>
          <PhysicalConnectionIsKilled>0</PhysicalConnectionIsKilled>
          <DisconnectDueToReadError>0</DisconnectDueToReadError>
          <NetworkErrorFoundInInputStream>0</NetworkErrorFoundInInputStream>
          <ErrorFoundBeforeLogin>0</ErrorFoundBeforeLogin>
          <SessionIsKilled>0</SessionIsKilled>
          <NormalDisconnect>0</NormalDisconnect>
        </TdsDisconnectFlags>
      </ConnectivityTraceRecord>
      <Stack>
        <frame id="0">0X000000000174C34B</frame>
        <frame id="1">0X0000000001748FDD</frame>
        <frame id="2">0X0000000002461001</frame>
        <frame id="3">0X0000000000C47E98</frame>
        <frame id="4">0X00000000008015AD</frame>
        <frame id="5">0X0000000000801492</frame>
        <frame id="6">0X00000000003CBBD8</frame>
        <frame id="7">0X00000000003CB8BA</frame>
        <frame id="8">0X00000000003CB6FF</frame>
        <frame id="9">0X00000000008E8FB6</frame>
        <frame id="10">0X00000000008E9175</frame>
        <frame id="11">0X00000000008E9839</frame>
        <frame id="12">0X00000000008E9502</frame>
        <frame id="13">0X0000000074E437D7</frame>
        <frame id="14">0X0000000074E43894</frame>
        <frame id="15">0X00000000775A652D</frame>

    Somehow all these records show error number 18452, whereas I never find that error in my SQL logs, where I see only 18456. I am just stuck at a dead end because this connection issue appears intermittently. Sorry for the long question, but I hope that if you read this, you can see that I tried a lot at my end before giving up.


  • Poor upload/download speed on 2 x ADSL lines into a Cisco 2621XM

    - by 2020mobile
    Hi, sorry, I've never been on this site before, so I apologise if this is not the right section or even the right forum. Users are complaining of very slow internet connectivity on site, and I have checked with our ISP, who say the line is testing at 8 Mb. We have 2 x BT lines that carry our ISP's broadband. Both lines go into a Cisco 2600-series router, which then has a PIX firewall off it. Connectivity is successful; it has just become really slow, and we are unable to download anything. The config is below:

      version 12.3
      no service pad
      service tcp-keepalives-in
      service tcp-keepalives-out
      service timestamps debug datetime msec
      service timestamps log datetime msec
      service password-encryption
      !
      hostname ROUTER-ADSL-INTERNET
      !
      logging buffered 16384 informational
      enable secret xxx
      enable password xxx
      !
      username xxx
      username xxx
      clock summer-time UK recurring last Sun Mar 1:00 last Sun Oct 1:00
      aaa new-model
      !
      aaa authentication login default local
      aaa authorization exec default local
      aaa session-id common
      ip subnet-zero
      no ip source-route
      !
      ip audit notify log
      ip audit po max-events 100
      no ip bootp server
      ip name-server 213.208.106.212
      no mpls ldp logging neighbor-changes
      no ftp-server write-enable
      !
      no voice hpi capture buffer
      no voice hpi capture destination
      !
      interface ATM0/0
       description 01270 111111
       no ip address
       no atm ilmi-keepalive
       pvc 0/38
        encapsulation aal5mux ppp dialer
        dialer pool-member 1
       !
       dsl operating-mode auto
      !
      interface FastEthernet0/0
       ip address 82.133.32.9 255.255.255.248
       shutdown
       speed 100
       full-duplex
       no cdp enable
      !
      interface ATM0/1
       description 01270 222222
       no ip address
       no atm ilmi-keepalive
       pvc 0/38
        encapsulation aal5mux ppp dialer
        dialer pool-member 1
       !
       dsl operating-mode auto
      !
      interface FastEthernet0/1
       ip address 217.146.115.49 255.255.255.240
       duplex auto
       speed auto
       no cdp enable
      !
      interface Dialer0
       ip address 217.146.115.250 255.255.255.248
       encapsulation ppp
       dialer pool 1
       dialer-group 1
       ppp authentication chap callin
       ppp chap hostname [email protected]
       ppp chap password 7 xxxxx
       ppp multilink
      !
      ip classless
      ip route 0.0.0.0 0.0.0.0 Dialer0
      !
      no ip http server
      no ip http secure-server
      !
      no logging trap
      access-list 10 permit 217.146.115.50
      access-list 10 permit 82.133.32.10
      access-list 10 deny any
      access-list 22 permit 217.146.115.50
      access-list 22 permit 217.206.239.86
      access-list 22 permit 82.133.32.10
      access-list 22 deny any
      dialer-list 1 protocol ip permit
      no cdp run
      !
      snmp-server community xxxxxx RO 10
      snmp-server enable traps tty
      radius-server authorization permit missing Service-Type
      !
      line con 0
       exec-timeout 5 0
       password 7 xxxxxx
      line aux 0
       no exec
      line vty 0 4
       access-class 22 in
       exec-timeout 5 0
       password 7 xxxxxx
       transport input telnet ssh
       transport output none
      line vty 5 15
       password 7 xxxxxx
       transport input telnet ssh
      !
      ntp clock-period 17180095
      ntp server 130.88.200.98
      !
      end

    Now, my knowledge is very limited, but the ISP have said that while the lines are bonded, each now needs a separate login: they recently changed their L2TP router and it enforces separate logins. When the lines were configured, we were given two logins. So my question is: what changes do I need to make to the config in order to get this working? It was OK before their change, and I do have the second login:

      01270 111111 - [email protected]
      01270 222222 - [email protected]

    Apologies for the long post, and thanks for taking the time to read it. If I can provide any more info, please let me know. Thanks,


  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:

      Sun Blade X6270
      2* LSISAS1068E SAS controllers
      2* Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
      Fedora Core 12
      2.6.33 release kernel from FC13 (also tried the latest 2.6.31 kernel from FC12; same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf

    It's using PCI Express 1.0a with 8 lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD, so between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled, and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD. My multipath configuration:

      multipath {
              rr_min_io 100
              uid 0
              path_grouping_policy multibus
              failback manual
              path_selector "round-robin 0"
              rr_weight priorities
              alias somealias
              no_path_retry queue
              mode 0644
              gid 0
              wwid somewwid
      }

    I tried values of 50, 100, and 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io, I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out. According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

      echo 2 > /proc/irq/24/smp_affinity
      echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts, like so:

      taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle, and all the other cores are waiting on I/O, as one would expect:

      Cpu0  : 0.0%us, 1.0%sy, 0.0%ni, 91.2%id,  7.5%wa, 0.0%hi, 0.2%si, 0.0%st
      Cpu1  : 0.0%us, 0.8%sy, 0.0%ni, 93.0%id,  0.2%wa, 0.0%hi, 6.0%si, 0.0%st
      Cpu2  : 0.0%us, 0.6%sy, 0.0%ni, 94.4%id,  0.1%wa, 0.0%hi, 4.8%si, 0.0%st
      Cpu3  : 0.0%us, 7.5%sy, 0.0%ni, 36.3%id, 56.1%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu4  : 0.0%us, 1.3%sy, 0.0%ni, 85.7%id,  4.9%wa, 0.0%hi, 8.1%si, 0.0%st
      Cpu5  : 0.1%us, 5.5%sy, 0.0%ni, 36.2%id, 58.3%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu6  : 0.0%us, 5.0%sy, 0.0%ni, 36.3%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu7  : 0.0%us, 5.1%sy, 0.0%ni, 36.3%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu8  : 0.1%us, 8.3%sy, 0.0%ni, 27.2%id, 64.4%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu9  : 0.1%us, 7.9%sy, 0.0%ni, 36.2%id, 55.8%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu10 : 0.0%us, 7.8%sy, 0.0%ni, 36.2%id, 56.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu11 : 0.0%us, 7.3%sy, 0.0%ni, 36.3%id, 56.4%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu12 : 0.0%us, 5.6%sy, 0.0%ni, 33.1%id, 61.2%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu13 : 0.1%us, 5.3%sy, 0.0%ni, 36.1%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu14 : 0.0%us, 4.9%sy, 0.0%ni, 36.4%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu15 : 0.1%us, 5.4%sy, 0.0%ni, 36.5%id, 58.1%wa, 0.0%hi, 0.0%si, 0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above, I would expect something in the range of 2 * 1920 = 3840 MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!
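
    For reference, the ceilings quoted above reduce to a few lines of arithmetic. A sketch (Python; counting 3 Gb/sec as 300 MB/sec assumes the usual 8b/10b line coding of 3 Gb/sec SAS):

      # Theoretical ceilings from the figures quoted above (MB/sec).
      pcie_per_controller = 8 * 250        # PCIe 1.0a x8: 250 MB/sec per lane
      sas_per_jbod        = 2 * 4 * 300    # 2 PHYs * 4 ports * 3 Gb/sec -> ~300 MB/sec per port
      disks_per_jbod      = 24 * 80        # ~80 MB/sec sustained per disk

      per_jbod = min(pcie_per_controller, sas_per_jbod, disks_per_jbod)
      print("per-JBOD ceiling: %d MB/sec" % per_jbod)        # 1920: the disks are the bottleneck
      print("both JBODs:       %d MB/sec" % (2 * per_jbod))  # 3840 expected vs ~2250 observed

    The min() makes the structure of the question explicit: per JBOD, the disks (1920 MB/sec) bind before either the SAS fabric (2400 MB/sec) or the PCIe link (2000 MB/sec), so the observed 2200-2300 MB/sec across both JBODs is well below every per-path limit.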


  • Bind9 as a caching resolver fails with mismatch ID on localhost but not external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places, and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver. I installed bind, configured it, and started using it. Initially I thought it was working OK, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing over to TCP mode. So: I'm trying to find out why bind fails when I query for domain info and the result is over 512 bytes (causing a truncation and retry over TCP). Specifically, it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be backwards from the problem most people have when using bind (fails on the external IP, works on localhost). If I do dig @localhost google.com (which has a response of <512 bytes), then it works; I get no warnings and plenty of output:

      $ dig @localhost google.com
      ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com
      [snip lots of output]
      ;; Query time: 39 msec
      ;; SERVER: 127.0.0.1#53(127.0.0.1)
      ;; WHEN: Thu Oct 17 23:08:34 2013
      ;; MSG SIZE  rcvd: 495

    If I do dig @localhost play.google.com (which has a larger response), then I get back something like:

      $ dig @localhost play.google.com
      ;; Truncated, retrying in TCP mode.
      ;; ERROR: ID mismatch: expected ID 3696, got 27130

    This seems to be standard, documented behaviour: when the UDP response is large (here 'large' == 512 bytes), it falls back to TCP. The ID mismatch is not expected, though. If I do dig @192.168.0.2 play.google.com, then I still get the warning about using TCP mode, but it otherwise works:

      $ dig @192.168.0.2 play.google.com
      ;; Truncated, retrying in TCP mode.
      ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com
      [snip most of the output]
      ;; Query time: 5 msec
      ;; SERVER: 192.168.0.2#53(192.168.0.2)
      ;; WHEN: Thu Oct 17 23:05:55 2013
      ;; MSG SIZE  rcvd: 521

    At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard; I've got the following set:

      options {
              directory "/var/cache/bind";
              allow-query { 192.168/16; 127.0.0.1; };
              forwarders { 8.8.8.8; 8.8.4.4; };
              dnssec-validation auto;
              edns-udp-size 4096;
              allow-transfer { any; };
              auth-nxdomain no;    # conform to RFC1035
              listen-on-v6 { any; };
      };

    And my /etc/resolv.conf is just:

      nameserver 127.0.0.1
      search .local

    The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com, then it works; no warning about failover to TCP, no ID mismatch, and a standard-looking result. To be honest, if there were a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096, and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely, since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works: no truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarders (e.g. to OpenDNS), but that makes no difference.

    There's one last data point: if I repeatedly do dig @localhost play.google.com, I don't always get an ID mismatch; sometimes I get a REFUSED error instead. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first:

      $ dig @localhost play.google.com
      ;; Truncated, retrying in TCP mode.
      ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com
      ; (1 server found)
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;play.google.com.               IN      A
      ;; Query time: 4 msec
      ;; SERVER: 127.0.0.1#53(127.0.0.1)
      ;; WHEN: Thu Oct 17 23:20:13 2013
      ;; MSG SIZE  rcvd: 33

    Any insights or things to try would be much appreciated.
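
    One way to pin down exactly where the failover starts misbehaving is to sweep the advertised EDNS buffer size with the same dig binary used above. A small sketch (Python; the probed domain and the size list are illustrative):

      import subprocess

      # Sweep the advertised EDNS buffer size to find where the
      # truncate-and-retry-over-TCP path (and the ID mismatch) kicks in.
      for bufsize in (4096, 2048, 1024, 512):
          proc = subprocess.run(
              ["dig", "+bufsize=%d" % bufsize, "@localhost", "play.google.com"],
              capture_output=True, text=True,
          )
          output = proc.stdout + proc.stderr
          print("bufsize=%-4d truncated=%-5s id_mismatch=%s" % (
              bufsize,
              "retrying in TCP mode" in output,
              "ID mismatch" in output,
          ))

    Alternating the sweep between @localhost and @192.168.0.2 would also capture the REFUSED asymmetry described above in one reproducible run.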


  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine, but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. Then last night. Today it has happened three times so far. The timeline is fairly predictable when it happens:

      1. Trans log backups.
      2. Login failure for "user2".
      3. Minidump.
      4. Another minidump, for the scheduler.
      5. Repeated 17883 events.
      6. Server fails little by little until it won't accept any requests.
      7. Reboot is all that gets us going again (a band-aid).

    Interestingly, though, the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management Studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of rebooting. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes, until we started seeing timeouts and 17883s again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883s. Our entire tech team here is scratching heads... some ideas we're kicking around:

      - An MS Cumulative Update hit around the same time as when we first had the problem. Since then, we've rolled it back. Maybe it didn't roll back all the way.
      - The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. The problem with this theory is that there isn't significant CPU usage. At any rate, we're not ruling out a SQL 2005 bug at all.
      - Maybe we added one too many import processes and have reached our limit on this box (hard to believe).

    Looking at SQLDUMP0151.log at the time of one of the crashes: there are some login failures, and then there are two stack dumps, first a normal stack dump, then a scheduler dump. Here's a snippet:

      2009-11-10 11:59:14.95 spid63  Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required.
      2009-11-10 11:59:15.09 spid63  Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
      2009-11-10 12:02:33.24 Logon   Error: 18456, Severity: 14, State: 16.
      2009-11-10 12:02:33.24 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
      2009-11-10 12:08:21.12 Logon   Error: 18456, Severity: 14, State: 16.
      2009-11-10 12:08:21.12 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
      2009-11-10 12:13:49.38 Logon   Error: 18456, Severity: 14, State: 16.
      2009-11-10 12:13:49.38 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
      2009-11-10 12:15:16.88 Logon   Error: 18456, Severity: 14, State: 16.
      2009-11-10 12:15:16.88 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
      2009-11-10 12:18:24.41 Logon   Error: 18456, Severity: 14, State: 16.
      2009-11-10 12:18:24.41 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
      2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5'
      2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt
      2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
      2009-11-10 12:18:39.02 spid111 * *****************************************************************************
      2009-11-10 12:18:39.02 spid111 *
      2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP:
      2009-11-10 12:18:39.02 spid111 *   11/10/09 12:18:39 spid 111
      2009-11-10 12:18:39.02 spid111 *
      2009-11-10 12:18:39.02 spid111 *   Exception Address = 0159D56F Module(sqlservr+0059D56F)
      2009-11-10 12:18:39.02 spid111 *   Exception Code    = c0000005 EXCEPTION_ACCESS_VIOLATION
      2009-11-10 12:18:39.02 spid111 *   Access Violation occurred writing address 00000000
      2009-11-10 12:18:39.02 spid111 *   Input Buffer 138 bytes -
      2009-11-10 12:18:39.02 spid111 *     N R S C _ P T A    22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00
      2009-11-10 12:18:39.02 spid111 *     C _ Q A . d b o .  43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00
      2009-11-10 12:18:39.02 spid111 *     U s p S e l N e x  55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00
      2009-11-10 12:18:39.02 spid111 *     t A c c o u n t    74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00
      2009-11-10 12:18:39.02 spid111 *     @ i n t F o r m I  0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49
      2009-11-10 12:18:39.02 spid111 *     D   & 8 @ t x      00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00
      2009-11-10 12:18:39.02 spid111 *     t A l i a s §      74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04
      2009-11-10 12:18:39.02 spid111 *     Ð  GQE9732         d0 00 00 07 00 47 51 45 39 37 33 32
      2009-11-10 12:18:39.02 spid111 *
      2009-11-10 12:18:39.02 spid111 *
      2009-11-10 12:18:39.02 spid111 *  MODULE     BASE      END       SIZE
      2009-11-10 12:18:39.02 spid111 * sqlservr    01000000  02C09FFF  01c0a000
      2009-11-10 12:18:39.02 spid111 * ntdll       7C800000  7C8C1FFF  000c2000
      2009-11-10 12:18:39.02 spid111 * kernel32    77E40000  77F41FFF  00102000
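
    Incidentally, the Input Buffer block in the dump is just UTF-16LE text printed next to its hex, so the statement that was on the wire when the access violation hit can be recovered mechanically. A quick sketch (Python; the hex is transcribed, truncated, from the dump above):

      # The Input Buffer in the dump is UTF-16LE; decode the hex columns.
      hex_bytes = (
          "4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00 "        # N R S C _ P T A
          "43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00 "  # C _ Q A . d b o .
          "55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00 "  # U s p S e l N e x
          "74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00"         # t A c c o u n t
      )
      data = bytes.fromhex(hex_bytes)
      print(data.decode("utf-16-le"))   # -> NRSC_PTAC_QA.dbo.UspSelNextAccount

    So the crashing batch was a call to the stored procedure NRSC_PTAC_QA.dbo.UspSelNextAccount with an @intFormID and @txtAlias parameter, which at least narrows down which workload to look at.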


  • Configure Perl DBI and DBD in Linux

    - by Balualways
    I am new to Perl, and I work on a Linux OEL 5.x server. I am trying to configure the Perl DB modules for Oracle connectivity (the DBD and DBI modules). Can anyone help me out with the installation procedure? I tried CPAN, but it didn't really work out. Any help would be appreciated. I am not quite sure whether I need to initialize any variables other than $LD_LIBRARY_PATH and $ORACLE_HOME. These are my observations.

    ISSUE: I am getting the following error while using the DBI module to connect to Oracle:

      install_driver(Oracle) failed: Can't locate loadable object for module DBD::Oracle in @INC
      (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi
      /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl
      /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi
      /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl
      /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at (eval 3) line 3
      Compilation failed in require at (eval 3) line 3.
      Perhaps a module that DBD::Oracle requires hasn't been fully installed at connectdb.pl line 57

    I had installed the DBD for Oracle from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD/DBD-Oracle-1.50. Could you please take a look at the steps and correct me if I am wrong?

    Observations:

      $ echo $LD_LIBRARY_PATH
      /opt/CA/UnicenterAutoSysJM/autosys/lib:/opt/CA/SharedComponents/Csam/SockAdapter/lib:/opt/CA/SharedComponents/ETPKI/lib:/opt/CA/CAlib
      $ echo $ORACLE_HOME
      /usr/local/oracle/ORA

    This is how I tried to install the DBD module:

      1. Download the DBD-Oracle 1.50 file.
      2. Copy it to /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD.
      3. Untar it and run Makefile.PL.

    Message:

      Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/
      Configuring DBD::Oracle for perl 5.008008 on linux (x86_64-linux-thread-multi)
      Remember to actually *READ* the README file! Especially if you have any problems.
      Installing on a linux, Ver#2.6
      Using Oracle in /opt/oracle/product/10.2
      DEFINE _SQLPLUS_RELEASE = "1002000400" (CHAR)
      Oracle version 10.2.0.4 (10.2)
      Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
      Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms64.mk
      Found /opt/oracle/product/10.2/rdbms/lib/ins_rdbms.mk
      Using /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
      Your LD_LIBRARY_PATH env var is set to '/usr/local/oracle/ORA/lib:/usr/dt/lib:/usr/openwin/lib:/usr/local/oracle/ORA/ows/cartx/wodbc/1.0/util/lib:/usr/local/oracle/ORA/lib:/usr/local/sybase/OCS-12_0/lib:/usr/local/sybase/lib:/home/oracle/jdbc/jdbcoci73/lib:./'
      WARNING: Your LD_LIBRARY_PATH env var doesn't include '/opt/oracle/product/10.2/lib' but probably needs to.
      Reading /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
      Reading /usr/local/oracle/ORA/rdbms/lib/env_rdbms.mk
      Attempting to discover Oracle OCI build rules
      sh: make: command not found
      by executing: [make -f /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk build ECHODO=echo ECHO=echo GENCLNTSH='echo genclntsh' CC=true OPTIMIZE= CCFLAGS= EXE=DBD_ORA_EXE OBJS=DBD_ORA_OBJ.o]
      WARNING: Oracle build rule discovery failed (32512)
      Add path to make command into your PATH environment variable.
      Oracle oci build prolog: [sh: make: command not found]
      Oracle oci build command: []
      WARNING: Unable to interpret Oracle build commands from /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk.
      (Will continue by using fallback approach.)
      Please report this to [email protected]. See README for what to include.
      Found header files in /opt/oracle/product/10.2/rdbms/public.
      client_version=10.2
      DEFINE= -Wall -Wno-comment -DUTF8_SUPPORT -DORA_OCI_VERSION=\"10.2.0.4\" -DORA_OCI_102
      Checking for functioning wait.ph
      System: perl5.008008 linux ca-build9.us.oracle.com 2.6.20-1.3002.fc6xen #1 smp thu apr 30 18:08:39 pdt 2009 x86_64 x86_64 x86_64 gnulinux
      Compiler: gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -Wdeclaration-after-statement -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm
      Linker: not found
      Sysliblist: -ldl -lm -lpthread -lnsl -lirc
      Oracle makefiles would have used these definitions but we override them:
        CC:      cc
        CFLAGS:  $(GFLAG) $(OPTIMIZE) $(CDEBUG) $(CCFLAGS) $(PFLAGS)\ $(SHARED_CFLAG) $(USRFLAGS)
                 [$(GFLAG) -O3 $(CDEBUG) -m32 $(TRIGRAPHS_CCFLAGS) -fPIC -I/usr/local/oracle/ORA/rdbms/demo -I/usr/local/oracle/ORA/rdbms/public -I/usr/local/oracle/ORA/plsql/public -I/usr/local/oracle/ORA/network/public -DLINUX -D_GNU_SOURCE -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -DSLTS_ENABLE -DSLMXMX_ENABLE -D_REENTRANT -DNS_THREADS -fno-strict-aliasing $(LPFLAGS) $(USRFLAGS)]
        build:   $(CC) $(ORALIBPATH) -o $(EXE) $(OBJS) $(OCISHAREDLIBS)
                 [ cc -L$(LIBHOME) -L/usr/local/oracle/ORA/rdbms/lib/ -o $(EXE) $(OBJS) -lclntsh $(EXPDLIBS) $(EXOSLIBS) -ldl -lm -lpthread -lnsl -lirc -ldl -lm $(USRLIBS) -lpthread]
        LDFLAGS: $(LDFLAGS32)
                 [-m32 -o $@ -L/usr/local/oracle/ORA/rdbms//lib32/ -L/usr/local/oracle/ORA/lib32/ -L/usr/local/oracle/ORA/lib32/stubs/]
      Linking with /usr/local/oracle/ORA/rdbms/lib/defopt.o -lclntsh -ldl -lm -lpthread -lnsl -lirc -ldl -lm -lpthread [from $(DEF_OPT) $(OCISHAREDLIBS)]
      Checking if your kit is complete...
      Looks good
      LD_RUN_PATH=/usr/local/oracle/ORA/lib
      Using DBD::Oracle 1.50.
      Using DBD::Oracle 1.50.
      Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/
      Writing Makefile for DBD::Oracle
      Writing MYMETA.yml and MYMETA.json
      *** If you have problems...
      read all the log printed above, and the README and README.help.txt files.
      (Of course, you have read README by now anyway, haven't you?)
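
    Given the WARNING above about LD_LIBRARY_PATH, a quick environment sanity check is worth scripting before retrying the build. A sketch (Python; it only mirrors the checks the Makefile.PL output suggests, and libclntsh is the standard Oracle client library name):

      import os

      # Mirror the Makefile.PL warning: is $ORACLE_HOME/lib visible to the loader?
      oracle_home = os.environ.get("ORACLE_HOME", "")
      ld_path = os.environ.get("LD_LIBRARY_PATH", "").split(":")
      lib_dir = os.path.join(oracle_home, "lib")

      print("ORACLE_HOME set:      ", bool(oracle_home))
      print("lib dir exists:       ", os.path.isdir(lib_dir))
      print("lib dir on LD path:   ", lib_dir in ld_path)
      print("libclntsh.so present: ", os.path.isdir(lib_dir) and any(
          name.startswith("libclntsh.so") for name in os.listdir(lib_dir)))

    Note also the two lines buried in the output above, "sh: make: command not found" and "Linker: not found": with no build toolchain on the PATH, no Oracle.so shared object gets compiled, which matches the "Can't locate loadable object for module DBD::Oracle" error exactly.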


  • Windows 7 client can't connect to CentOS PPTP VPN

    - by Chris
    I have a Macintosh (10.8.2) that connects just fine to a CentOS 6.0 virtual private server (OpenVZ, with PPP added by the host) via PPTP. A Windows 7 Home Premium client (virtualized in Sun's VirtualBox), on the same computer, using the same Ethernet connection, cannot connect to the Linux VPN server. I have iptables disabled (for testing) on the Linux box. I have the Windows firewall turned off. /var/log/messages looks like this for a Windows connection:

      Oct 12 18:44:30 production pptpd[1880]: CTRL: Client 66.104.246.168 control connection started
      Oct 12 18:44:30 production pptpd[1880]: CTRL: Starting call (launching pppd, opening GRE)
      Oct 12 18:44:30 production pppd[1881]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
      Oct 12 18:44:30 production pppd[1881]: pptpd-logwtmp: $Version$
      Oct 12 18:44:30 production pppd[1881]: pppd options in effect:
      Oct 12 18:44:30 production pppd[1881]: debug#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: nologfd#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: dump#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: plugin /usr/lib/pptpd/pptpd-logwtmp.so#011#011# (from command line)
      Oct 12 18:44:30 production pppd[1881]: require-mschap-v2#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: refuse-pap#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: refuse-chap#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: refuse-mschap#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: name pptpd#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: pptpd-original-ip 66.104.246.168#011#011# (from command line)
      Oct 12 18:44:30 production pppd[1881]: 115200#011#011# (from command line)
      Oct 12 18:44:30 production pppd[1881]: lock#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: local#011#011# (from command line)
      Oct 12 18:44:30 production pppd[1881]: novj#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: novjccomp#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: ipparam 66.104.246.168#011#011# (from command line)
      Oct 12 18:44:30 production pppd[1881]: proxyarp#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: 192.168.97.1:192.168.97.10#011#011# (from command line)
      Oct 12 18:44:30 production pppd[1881]: nobsdcomp#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: require-mppe-128#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: mppe-stateful#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:44:30 production pppd[1881]: pppd 2.4.5 started by root, uid 0
      Oct 12 18:44:30 production pppd[1881]: Using interface ppp0
      Oct 12 18:44:30 production pppd[1881]: Connect: ppp0 <--> /dev/pts/1

    (At this point the Windows machine displays a dialog reading: "Verifying user name and password...")

      Oct 12 18:45:00 production pppd[1881]: LCP: timeout sending Config-Requests
      Oct 12 18:45:00 production pppd[1881]: Connection terminated.
      Oct 12 18:45:00 production pppd[1881]: Modem hangup
      Oct 12 18:45:00 production pppd[1881]: Exit.
      Oct 12 18:45:00 production pptpd[1880]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
      Oct 12 18:45:00 production pptpd[1880]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
      Oct 12 18:45:00 production pptpd[1880]: CTRL: Client 66.104.246.168 control connection finished

    The Macintosh connecting looks like this in /var/log/messages:

      Oct 12 18:50:49 production pptpd[1920]: CTRL: Client 66.104.246.168 control connection started
      Oct 12 18:50:49 production pptpd[1920]: CTRL: Starting call (launching pppd, opening GRE)
      Oct 12 18:50:49 production pppd[1921]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
      Oct 12 18:50:49 production pppd[1921]: pptpd-logwtmp: $Version$
      Oct 12 18:50:49 production pppd[1921]: pppd options in effect:
      Oct 12 18:50:49 production pppd[1921]: debug#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: nologfd#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: dump#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: plugin /usr/lib/pptpd/pptpd-logwtmp.so#011#011# (from command line)
      Oct 12 18:50:49 production pppd[1921]: require-mschap-v2#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: refuse-pap#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: refuse-chap#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: refuse-mschap#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: name pptpd#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: pptpd-original-ip 66.104.246.168#011#011# (from command line)
      Oct 12 18:50:49 production pppd[1921]: 115200#011#011# (from command line)
      Oct 12 18:50:49 production pppd[1921]: lock#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: local#011#011# (from command line)
      Oct 12 18:50:49 production pppd[1921]: novj#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: novjccomp#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: ipparam 66.104.246.168#011#011# (from command line)
      Oct 12 18:50:49 production pppd[1921]: proxyarp#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: 192.168.97.1:192.168.97.10#011#011# (from command line)
      Oct 12 18:50:49 production pppd[1921]: nobsdcomp#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: require-mppe-128#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: mppe-stateful#011#011# (from /etc/ppp/options.pptpd)
      Oct 12 18:50:49 production pppd[1921]: pppd 2.4.5 started by root, uid 0
      Oct 12 18:50:49 production pppd[1921]: Using interface ppp0
      Oct 12 18:50:49 production pppd[1921]: Connect: ppp0 <--> /dev/pts/1
      Oct 12 18:50:52 production pppd[1921]: MPPE 128-bit stateless compression enabled
      Oct 12 18:50:52 production pppd[1921]: Unsupported protocol 'IPv6 Control Protocol' (0x8057) received
      Oct 12 18:50:52 production pppd[1921]: Unsupported protocol 'Apple Client Server Protocol Control' (0x8235) received
      Oct 12 18:50:52 production pppd[1921]: Cannot determine ethernet address for proxy ARP
      Oct 12 18:50:52 production pppd[1921]: local IP address 192.168.97.1
      Oct 12 18:50:52 production pppd[1921]: remote IP address 192.168.97.10
      Oct 12 18:50:52 production pppd[1921]: pptpd-logwtmp.so ip-up ppp0 chris 66.104.246.168
I'm baffled...


  • FFmpeg audio doesn't work in converted videos

    - by Juddy Swaft
    NOTICE: when I convert videos via the terminal and download them from FTP to my PC, the audio works fine. This is the code I use to convert AVI to MP4:

      if($ext == "avi" && $convert_avi == true)
      {
          $convert_source = _VIDEOS_DIR_PATH.$new_name;
          $conv_name = substr(md5($file['name'].rand(1,888)), 2, 10).".mp4";
          $converted_file = _VIDEOS_DIR_PATH.$conv_name;
          $ffmpeg_command = 'ffmpeg -i '.$convert_source.' -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 '.$converted_file;
          echo exec($ffmpeg_command);
          $sql = "UPDATE pm_temp SET url = '".$conv_name."' WHERE url = '".$new_name."' LIMIT 1";
          $result = @mysql_query($sql);
          unlink($convert_source);
      }

    The ffmpeg console output:

      root@1tb:~# ffmpeg -i sample.avi -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 goodsample.mp4
      ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers
        built on Feb 22 2013 07:18:58 with gcc 4.4.5
        configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-version3 --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libfreetype --enable-libschroedinger --disable-encoder=libschroedinger -s
        libavutil    50. 43. 0 / 50. 43. 0
        libavcodec   52.123. 0 / 52.123. 0
        libavformat  52.111. 0 / 52.111. 0
        libavdevice  52.  5. 0 / 52.  5. 0
        libavfilter   1. 80. 0 /  1. 80. 0
        libswscale    0. 14. 1 /  0. 14. 1
        libpostproc  51.  2. 0 / 51.  2. 0
      [mp3 @ 0x191d4100] Header missing
      [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected
      Input #0, avi, from 'sample.avi':
        Metadata:
          encoder         : VirtualDubMod 1.5.10.2 (build 2540/release)
        Duration: 00:01:01.81, start: 0.000000, bitrate: 1194 kb/s
          Stream #0.0: Video: mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 23.98 tbr,
          Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s
      [buffer @ 0x191d1c80] w:640 h:352 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param:
      [scale @ 0x191d6880] w:640 h:352 fmt:yuv420p -> w:1280 h:720 fmt:yuv420p flags:0
      [libx264 @ 0x191ce5a0] Default settings detected, using medium profile
      [libx264 @ 0x191ce5a0] using SAR=45/44
      [libx264 @ 0x191ce5a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle S
      [libx264 @ 0x191ce5a0] profile High, level 3.1
      [libx264 @ 0x191ce5a0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2
      6 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_off
      1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_l
      Output #0, mp4, to 'goodsample.mp4':
        Metadata:
          encoder         : Lavf52.111.0
          Stream #0.0: Video: libx264, yuv420p, 1280x720 [PAR 45:44 DAR 20:11], q=2-31
          Stream #0.1: Audio: libmp3lame, 44100 Hz, stereo, s16, 64 kb/s
      Stream mapping:
        Stream #0.0 -> #0.0
        Stream #0.1 -> #0.1
      Press [q] to stop, [?] for help
      [mp3 @ 0x191d4100] Header missing
      Error while decoding stream #0.1
      [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected
      [mp3 @ 0x191d4100] incomplete frame 9467kB time=00:01:00.32 bitrate=1285.5kbits/
      Error while decoding stream #0.1
      frame= 1852 fps= 20 q=29.0 Lsize=    9652kB time=00:01:01.72 bitrate=1280.9kbits
      video:9121kB audio:483kB global headers:0kB muxing overhead 0.499688%
      [libx264 @ 0x191ce5a0] frame I:11   Avg QP:16.78  size: 51456
      [libx264 @ 0x191ce5a0] frame P:784  Avg QP:20.81  size:  8954
      [libx264 @ 0x191ce5a0] frame B:1057 Avg QP:26.06  size:  1659
      [libx264 @ 0x191ce5a0] consecutive B-frames: 22.0% 3.1% 7.5% 67.4%
      [libx264 @ 0x191ce5a0] mb I  I16..4: 31.1% 59.8% 9.1%
      [libx264 @ 0x191ce5a0] mb P  I16..4: 1.8% 2.6% 0.2%  P16..4: 24.3% 7.0% 4.0
      [libx264 @ 0x191ce5a0] mb B  I16..4: 0.1% 0.1% 0.0%  B16..8: 22.7% 0.8% 0.2
      [libx264 @ 0x191ce5a0] 8x8 transform intra:57.0% inter:72.6%
      [libx264 @ 0x191ce5a0] coded y,uvDC,uvAC intra: 44.4% 33.3% 10.3% inter: 7.6% 5.
      [libx264 @ 0x191ce5a0] i16 v,h,dc,p: 68% 14% 8% 10%
      [libx264 @ 0x191ce5a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 14% 27% 5% 7% 7% 6
      [libx264 @ 0x191ce5a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 14% 14% 6% 10% 9% 7
      [libx264 @ 0x191ce5a0] i8c dc,h,v,p: 67% 13% 17% 3%
      [libx264 @ 0x191ce5a0] Weighted P-Frames: Y:1.9% UV:0.4%
      [libx264 @ 0x191ce5a0] ref P L0: 62.2% 12.8% 10.3% 14.5% 0.2%
      [libx264 @ 0x191ce5a0] ref B L0: 88.1% 5.5% 6.4%
      [libx264 @ 0x191ce5a0] ref B L1: 95.7% 4.3%
      [libx264 @ 0x191ce5a0] kb/s:1209.03

    I know there are a couple of errors in there, but I don't know how to fix them. I would also be very thankful if someone could help reduce the video size, but that is not the main problem; the converted video weighs about as much as the original AVI, but still.
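
    One fragile spot, independent of the codec errors: the PHP splices raw paths into a shell command string. A sketch of the safer pattern (Python; the paths are placeholders), passing the arguments as a list so no shell parsing is involved:

      import subprocess

      source = "/var/www/videos/sample.avi"        # placeholder paths
      target = "/var/www/videos/goodsample.mp4"

      # Same options as the command above, but as an argv list: nothing is
      # interpreted by a shell, so spaces or metacharacters in filenames
      # cannot break (or hijack) the command.
      cmd = ["ffmpeg", "-i", source,
             "-acodec", "libmp3lame", "-vcodec", "libx264",
             "-s", "1280x720", "-ar", "44100", "-async", "44100",
             "-r", "29.970", "-ac", "2", "-qscale", "5", target]
      proc = subprocess.run(cmd, capture_output=True, text=True)
      print(proc.returncode)
      print(proc.stderr[-2000:])   # ffmpeg writes its log to stderr

    Capturing stderr also means the "Header missing" diagnostics above end up in the web app's logs instead of vanishing, which makes failures like this much easier to see from PHP-driven runs.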


  • Bandwidth Limit User

    - by user45611
    Hello, I'm saxtor. I would like to know how to limit users' bandwidth to 10 GB per day. However, I don't want to limit them by IP address, because if they were to go to an internet cafe, all the users at the cafe would be restricted by that quota; I need to track them by login. For example: the user requests to download a file from http://localhost with their username and password, and as they download the file, SQL updates the bandwidth they have used. I have a script here, but it's not working; my buffer doesn't meter the rate correctly when a user uses multiple connections. Thanks for the help!

      <?php
      /**
       * @author saxtor -- if you can improve this code email me @saxtorinc.com
       * @copyright 2010
       */

      /*
       * CREATE TABLE IF NOT EXISTS max_traffic (
       *   id int(255) NOT NULL AUTO_INCREMENT,
       *   limit int(255) NOT NULL,
       *   PRIMARY KEY (id)
       * ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=0 ;
       */

      //SQL Connection [this is hackable, for testing]
      date_default_timezone_set("America/Guyana");
      mysql_connect("localhost", "root", "") or die(mysql_error());
      mysql_select_db("Quota") or die(mysql_error());

      function quota($id)
      {
          $result = mysql_query("SELECT `limit` FROM max_traffic WHERE id='$id' ") or die(error_log(mysql_error()));
          $row = mysql_fetch_array($result);
          return $row[0];
      }

      function update_quota($id, $value)
      {
          $result = mysql_query("UPDATE `max_traffic` SET `limit`='$value' WHERE id='$id'") or die(mysql_error());
          return $value;
      }

      if (quota(1) != 0)
          $limit = quota(1);
      else
          $limit = 0;

      $multipart = false;
      //was a part of the file requested? (partial download)
      $range = $_SERVER["HTTP_RANGE"];
      if ($range) {
          //pass client Range header to rapidshare
          // _insert($range);
          $cookie .= "\r\nRange: $range";
          $multipart = true;
          header("X-UR-RANGE-Range: $range");
      }

      $url = 'http://127.0.0.1/puppy.iso';
      $filename = basename($url);

      //octet-stream + attachment = client always stores file
      header('Content-type: application/octet-stream');
      header('Content-Disposition: attachment; filename="'.$filename.'"');
      //always included so clients know this script supports resuming
      header("Accept-Ranges: bytes");

      //awful hack to pass rapidshare the premium cookie
      $user_agent = ini_get("user_agent");
      ini_set("user_agent", $user_agent . "\r\nCookie: enc=$cookie");
      $httphandle = fopen($url, "r");
      $headers = stream_get_meta_data($httphandle);
      $size = $headers["wrapper_data"][6];
      $sizer = explode(' ', $size);
      $size = $sizer[1];

      //let's check the return header of rapidshare for range / length indicators
      //we'll just pass these to the client
      foreach ($headers["wrapper_data"] as $header) {
          $header = trim($header);
          if (substr(strtolower($header), 0, strlen("content-range")) == "content-range") {
              // _insert($range);
              header($header);
              header("X-RS-RANGE-" . $header);
              $multipart = true; //content-range indicates partial download
          } elseif (substr(strtolower($header), 0, strlen("content-length")) == "content-length") {
              // _insert($range);
              header($header);
              header("X-RS-CL-" . $header);
          }
      }

      if ($multipart)
          header('HTTP/1.1 206 Partial Content');
      flush();

      $speed = 4128;
      $packet = 1;    //this is private, don't touch
      $bufsize = 128; //this is private, don't touch
      $bandwidth = 0; //this is private, don't touch

      while (!(connection_aborted() || connection_status() == 1) && $size > 0) {
          while (!feof($httphandle) && $size > 0) {
              if ($limit <= 0)
                  $size = 0;
              if ($size < $bufsize && $size != 0 && $limit != 0) {
                  echo fread($httphandle, $size);
                  $bandwidth += $size;
              } else {
                  if ($limit != 0)
                      echo fread($httphandle, $bufsize);
                  $bandwidth += $bufsize;
              }
              $size -= $bufsize;
              $limit -= $bufsize;
              flush();
              if ($speed > 0 && ($bandwidth > $speed * $packet * 103)) {
                  usleep(100000);
                  $packet++;
                  //update_quota(1,$limit);
              }
              error_log(update_quota(1, $limit));
              $limit = quota(1);
              //if( $size <= 0 )
              //    exit;
          }
          fclose($httphandle);
      }
      exit;
      ?>
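
    Stripped of the headers and the RapidShare plumbing, the loop the script needs is a metered relay: read a chunk, send it, charge exactly the bytes sent against the stored quota, and stop when the quota runs out. A minimal sketch of just that loop (Python; load_quota/save_quota stand in for the SQL table above):

      CHUNK = 128 * 1024                      # bytes per read
      quotas = {"user1": 10 * 1024**3}        # 10 GB/day; stand-in for the SQL table

      def load_quota(user):
          return quotas[user]

      def save_quota(user, remaining):
          quotas[user] = remaining

      def relay(user, src, dst):
          """Copy src to dst, charging only bytes actually sent against the quota."""
          remaining = load_quota(user)
          while remaining > 0:
              chunk = src.read(min(CHUNK, remaining))   # never read past the quota
              if not chunk:                             # EOF
                  break
              dst.write(chunk)
              remaining -= len(chunk)                   # charge what was sent, not a fixed bufsize
              save_quota(user, remaining)               # persist so parallel downloads share one quota

    Charging len(chunk) rather than a fixed buffer size is the key difference: the PHP above subtracts $bufsize even for short or empty reads, which is one reason its accounting drifts when a user opens multiple connections.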


  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing a Xen DomU with DRBD storage for easy failover. Most of the time, immediately after booting the DomU, I get an I/O error:

      [    3.153370] EXT3-fs (xvda2): using internal journal
      [    3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team
      [    3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max)
      [    3.515604] init: failsafe main process (397) killed by TERM signal
      [    3.801589] blkfront: barrier: write xvda2 op failed
      [    3.801597] blkfront: xvda2: barrier or flush: disabled
      [    3.801611] end_request: I/O error, dev xvda2, sector 52171168
      [    3.801630] end_request: I/O error, dev xvda2, sector 52171168
      [    3.801642] Buffer I/O error on device xvda2, logical block 6521396
      [    3.801652] lost page write due to I/O error on xvda2
      [    3.801755] Aborting journal on device xvda2.
      [    3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal
      [    3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only
      [    3.814754] journal commit I/O error
      [    6.973831] init: udev-fallback-graphics main process (538) terminated with status 1
      [    6.992267] init: plymouth-splash main process (546) terminated with status 1

    The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queuing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd ("wo:f" means flush, the next method drbd chooses after barrier):

      3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
         ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    And on the other host:

      3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
         ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    I also enabled the option disable_sendpage, as per the drbd docs:

      cat /sys/module/drbd/parameters/disable_sendpage
      Y

    I also tried adding barriers=0 to fstab as a mount option. Still it sometimes says:

      [   58.603896] blkfront: barrier: write xvda2 op failed
      [   58.603903] blkfront: xvda2: barrier or flush: disabled

    I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery-backed, it would not be smart anyway. Why does it still complain about barriers when I have disabled them?

    Both hosts are:

      Debian: 6.0.4
      uname -a: Linux 2.6.32-5-xen-amd64
      drbd: 8.3.7
      Xen: 4.0.1

    Guest:

      Ubuntu 12.04 LTS
      uname -a: Linux 3.2.0-24-generic pvops

    The drbd resource:

      resource drbdvm
      {
        meta-disk internal;
        device /dev/drbd3;

        startup
        {
          # The timeout value when the last known state of the other side was available. 0 means infinite.
          wfc-timeout 0;
          # Timeout value when the last known state was disconnected. 0 means infinite.
          degr-wfc-timeout 180;
        }

        syncer
        {
          # This is recommended only for low-bandwidth lines, to only send those
          # blocks which really have changed.
          #csums-alg md5;

          # Set to about half your net speed
          rate 60M;

          # It seems that this option moved to the 'net' section in drbd 8.4
          # (a later release than Debian currently has).
          verify-alg md5;
        }

        net
        {
          # The manpage says this is recommended only in pre-production (because of its performance),
          # to determine if your LAN card has a TCP checksum offloading bug.
          #data-integrity-alg md5;
        }

        disk
        {
          # Detach causes the device to work over-the-network-only after the
          # underlying disk fails. Detach is not default for historical reasons,
          # but is recommended by the docs. However, the Debian defaults in
          # drbd.conf suggest the machine will reboot in that event...
          on-io-error detach;

          # LVM doesn't support barriers, so disabling it. It will revert to flush.
          # Check wo: in /proc/drbd. If you don't disable it, you get IO errors.
          no-disk-barrier;
        }

        on host1
        {
          # universe is a VG
          disk /dev/universe/drbdvm-disk;
          address 10.0.0.1:7792;
        }

        on host2
        {
          # universe is a VG
          disk /dev/universe/drbdvm-disk;
          address 10.0.0.2:7792;
        }
      }

    The DomU cfg:

      bootloader = '/usr/lib/xen-default/bin/pygrub'
      vcpus = '2'
      memory = '512'

      #
      # Disk device(s).
      #
      root = '/dev/xvda2 ro'
      disk = [
          'phy:/dev/drbd3,xvda2,w',
          'phy:/dev/universe/drbdvm-swap,xvda1,w',
      ]

      #
      # Hostname
      #
      name = 'drbdvm'

      #
      # Networking
      #
      # fake IP for posting
      vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ]

      #
      # Behaviour
      #
      on_poweroff = 'destroy'
      on_reboot = 'restart'
      on_crash = 'restart'

    In my test setup, the primary host's storage is a 9650SE SATA-II RAID PCIe controller with battery; the secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
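
    Since the effective write-ordering method is what the wo: field reports, a tiny parser makes it easy to confirm on both hosts that no-disk-barrier actually took effect. A sketch (Python; it reads the same /proc/drbd format shown above, and the code-to-name mapping follows the drbd documentation):

      import re

      # drbd's wo: codes, per the documentation.
      METHODS = {"b": "barrier", "f": "flush", "d": "drain", "n": "none"}

      def write_ordering(path="/proc/drbd"):
          result, dev = {}, None
          with open(path) as fh:
              for line in fh:
                  m = re.match(r"\s*(\d+): cs:", line)
                  if m:                            # a device status line starts a record
                      dev = int(m.group(1))
                  m = re.search(r"\bwo:([bfdn])\b", line)
                  if m and dev is not None:        # wo: may sit on a continuation line
                      result[dev] = METHODS[m.group(1)]
          return result

      print(write_ordering())   # e.g. {3: 'flush'} for the resource above

    Note that this only tells you what drbd is doing on the Dom0 side; the "blkfront: barrier: write xvda2 op failed" messages above come from the guest's block frontend, one layer up from drbd.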


  • FFmpeg creates empty (black) frames

    - by resamsel
    I have a set of images from a timelapse shoot (172 JPG files) that I want to convert into a movie. I tried several parameters with FFmpeg, but all I get is a video with black frames (though it has the expected length).

      ffmpeg -f image2 -vcodec mjpeg -y -i img_%03d.jpg timelapse2.mpg

    The command above creates this video: http://sdm-net.org/data/timelapse2.mpg

    What I'm expecting is something like this (created with Time Lapse Assembler.app): https://vimeo.com/39038362 - this is my fallback option, but I'd really like to create timelapse movies from a script. I'm on OS X Lion (10.7.3) with FFmpeg 0.10 installed via Homebrew. I also tried to find a proper version of mencoder for OS X, but this doesn't seem to be an easy task. Also, ImageMagick's convert doesn't seem to work nicely; it creates really bad output, and it seems there's not much I can do about it...

    Edit: with libx264 and an mp4 container:

      ffmpeg -f image2 -y -i img_%03d.jpg -vcodec libx264 timelapse4.mp4

    Output:

      ffmpeg version 0.10 Copyright (c) 2000-2012 the FFmpeg developers
        built on Mar 26 2012 13:47:02 with clang 3.0 (tags/Apple/clang-211.12)
        configuration: --prefix=/usr/local/Cellar/ffmpeg/0.10 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --disable-ffplay
        libavutil      51. 34.101 / 51. 34.101
        libavcodec     53. 60.100 / 53. 60.100
        libavformat    53. 31.100 / 53. 31.100
        libavdevice    53.  4.100 / 53.  4.100
        libavfilter     2. 60.100 /  2. 60.100
        libswscale      2.  1.100 /  2.  1.100
        libswresample   0.  6.100 /  0.  6.100
        libpostproc    52.  0.100 / 52.  0.100
      Input #0, image2, from 'img_%03d.jpg':
        Duration: 00:00:06.88, start: 0.000000, bitrate: N/A
          Stream #0:0: Video: mjpeg, yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc
      [buffer @ 0x7f8ec9415f20] w:3888 h:2592 pixfmt:yuvj420p tb:1/1000000 sar:72/72 sws_param:
      [libx264 @ 0x7f8ec981d800] using SAR=1/1
      [libx264 @ 0x7f8ec981d800] frame MB size (243x162) > level limit (36864)
      [libx264 @ 0x7f8ec981d800] MB rate (984150) > level limit (983040)
      [libx264 @ 0x7f8ec981d800] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 AVX
      [libx264 @ 0x7f8ec981d800] profile High, level 5.1
      [libx264 @ 0x7f8ec981d800] 264 - core 120 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
      Output #0, mp4, to 'timelapse4.mp4':
        Metadata:
          encoder         : Lavf53.31.100
          Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], q=-1--1, 25 tbn, 25 tbc
      Stream mapping:
        Stream #0:0 -> #0:0 (mjpeg -> libx264)
      Press [q] to stop, [?] for help
      frame=  172 fps= 18 q=-1.0 Lsize=     259kB time=00:00:06.80 bitrate= 312.3kbits/s
      video:256kB audio:0kB global headers:0kB muxing overhead 1.089647%
      [libx264 @ 0x7f8ec981d800] frame I:1    Avg QP: 9.60  size:212820
      [libx264 @ 0x7f8ec981d800] frame P:43   Avg QP:30.50  size:   291
      [libx264 @ 0x7f8ec981d800] frame B:128  Avg QP:31.00  size:   285
      [libx264 @ 0x7f8ec981d800] consecutive B-frames:  0.6%  0.0%  1.7% 97.7%
      [libx264 @ 0x7f8ec981d800] mb I  I16..4: 22.5% 77.2%  0.3%
      [libx264 @ 0x7f8ec981d800] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%  skip:100.0%
      [libx264 @ 0x7f8ec981d800] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  0.0%  0.0%  0.0%  direct: 0.0%  skip:100.0%  L0: 1.2% L1:98.8% BI: 0.0%
      [libx264 @ 0x7f8ec981d800] 8x8 transform intra:77.2% inter:100.0%
      [libx264 @ 0x7f8ec981d800] coded y,uvDC,uvAC intra: 41.2% 23.4% 0.6% inter: 0.0% 0.0% 0.0%
      [libx264 @ 0x7f8ec981d800] i16 v,h,dc,p: 40% 25% 35%  1%
      [libx264 @ 0x7f8ec981d800] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 36% 32% 30%  1%  0%  0%  0%  0%  0%
      [libx264 @ 0x7f8ec981d800] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 51% 40%  6%  1%  1%  0%  1%  0%  1%
      [libx264 @ 0x7f8ec981d800] i8c dc,h,v,p: 60% 21% 19%  0%
      [libx264 @ 0x7f8ec981d800] Weighted P-Frames: Y:0.0% UV:0.0%
      [libx264 @ 0x7f8ec981d800] ref P L0: 92.3%  0.0%  0.0%  7.7%
      [libx264 @ 0x7f8ec981d800] ref B L0: 50.0%  0.0% 50.0%
      [libx264 @ 0x7f8ec981d800] ref B L1: 99.4%  0.6%
      [libx264 @ 0x7f8ec981d800] kb/s:304.49

    Output: timelapse4.mp4 (because of spam protection I can only post two links with my reputation): http sdm-net.org/data/timelapse4.mp4
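
    Those two x264 warnings ("frame MB size ... > level limit", "MB rate ... > level limit") say the 3888x2592 input exceeds what H.264 level 5.1 permits at 25 fps, so an encode at reduced resolution is a reasonable experiment. A sketch (Python; 1920x1280 preserves the source's 3:2 aspect ratio, and the input frame rate is an assumption):

      import subprocess

      cmd = ["ffmpeg", "-y",
             "-f", "image2", "-r", "25",       # assumed input frame rate
             "-i", "img_%03d.jpg",
             "-vf", "scale=1920:1280",         # 3888x2592 -> 1920x1280, same 3:2 aspect
             "-vcodec", "libx264",
             "timelapse_small.mp4"]
      subprocess.run(cmd, check=True)           # raises CalledProcessError if ffmpeg fails

    If the downscaled encode plays correctly, that points at the level-limit handling of the full-resolution frames rather than the JPEG decoding as the source of the black output.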

    Read the article

  • Windows XP - Repairing Corrupt System32\Config\System File

    - by SimonTewsi
    My apologies for this long post. I would like to describe the mess I'm in then ask some questions about how to fix it: Starting up my Windows XP SP1 machine I got the following message: Windows could not start because the following file is missing or corrupt: \WINDOWS\SYSTEM32\CONFIG\SYSTEM Tried restarting several times with same results then Googled the problem. Tried the fix described here: http://icrontic.com/articles/repair%5Fwindows%5Fxp (since my CPU does not have XD buffer overflow protection I did not set /NOEXECUTE=OPTIN as OS Load Option). This did not work. I then found another fix for the problem on hardwareanalysis.com: Basically, boot to dos prompt (or recovery console if available) and make backups of the following files:- c:\windows\system32\config\system (to c:\windows\tmp\system.bak) c:\windows\system32\config\software (to c:\windows\tmp\software.bak) c:\windows\system32\config\sam (to c:\windows\tmp\sam.bak) c:\windows\system32\config\security (to c:\windows\tmp\security.bak) c:\windows\system32\config\default (to c:\windows\tmp\default.bak) then delete the above files (not the backups!) then copy the above files in c:\windows\repair to the c:\windows\system32\config directory restart your computer This did work (and I wish I'd done it first, since it was completely reversible, unlike the first method). However, afterwards I found that all the user accounts on the PC were gone. I resurrected them by copying the backed up security file back into the system32\config folder (I may have copied the SAM file from backup as well, I cannot remember clearly now). Now the PC boots up and I can log in. However things are still not right. I tried to alter one of the user accounts and found I could not access the User Accounts in the Control Panel. Microsoft KB 919292 had a fix for the problem. However, the fix failed with a Windows Installer error: The Windows Installer Service could not be accessed. This can occur if you are running Windows in safe mode, or if Windows Installer is not correctly installed. Contact your support personnel for assistance. Windows Installer 3.1 was already installed. I reinstalled it but continued to get the Windows Installer error whenever I tried to run the fix in KB 919292. I have since noticed another three problems: 1) Several applications on the PC no longer run, eg Microsoft Word. Shortcuts no longer seem to do anything and if I run the executables directly (eg for Word by running C:\Program Files\Microsoft Office\Office10\Winword.exe) I get a message similar to: "Microsoft Word has not been installed for the current user. Please run setup to install the application." even though the executable is clearly visible in Windows Explorer (and even though Word actually opens - the error dialog appears after Word has opened. Clicking OK to the error dialog closes Word). 2) One or the other of the two fixes I tried for the original problem caused new user profiles to be created. eg My old user profile under the Documents and Settings folder was Simon. The old one still exists but there is now a new one called Simon.DBQ2515. Obviously the new one is being used because Opera (my browser that still works) no longer sees the bookmarks file under my old profile. 3) Probably as a result of fooling around with the Security file, when I try to boot off the Windows XP CD and run the Recovery Console I am now asked for the administrator password. The only problem is there is no administrator account on the PC. 
There is one account, LocalAdmin, that has administrative rights but when I entered the password for that account it did not work. It is so long since I originally set up the PC that I cannot remember if the original administrator account ever had a password and, if so, what it was. So, my question is: How can I fix this mess? In particular: 1) Having tried the two fixes linked to above, have I irreparably damaged the Windows instance, requiring a clean reinstallation of Windows + all applications, or should it be possible to get the machine working correctly again without such drastic measures? 2) Is there any way to get around the administrator password so I can use the Recovery Console again, given that there is no account called "administrator" and the password for the one account with admin privileges does not work (and that, before I started the second fix, I was not asked for an administrator password)? 3) Is there any easy way to fix the problem with the applications that think they are not installed? 4) Is there any easy way to fix the problem of the Windows Installer that does not work, even if reinstalled? Cheers, Simon
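
    For reference, the second fix described above boils down to a handful of Recovery Console commands; a sketch of the sequence for one of the five hives (the other four follow the same pattern, and the backup step is what makes the procedure reversible):

        copy c:\windows\system32\config\system c:\windows\tmp\system.bak
        delete c:\windows\system32\config\system
        copy c:\windows\repair\system c:\windows\system32\config\system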

    Read the article

  • Nginx + PHP-FPM on CentOS 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway This is what is currently installed on my server: Here are the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my logs, there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the following folders exist on the server. nginx.conf global nginx configuration user www-data; worker_processes 6; worker_rlimit_nofile 100000; error_log /var/log/nginx/error.log crit; pid /var/run/nginx.pid; events { worker_connections 2048; use epoll; multi_accept on; } http { include /etc/nginx/mime.types; default_type application/octet-stream; # cache informations about FDs, frequently accessed files can boost performance open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # to boost IO on HDD we can disable access logs access_log off; # copies data between one FD and other from within the kernel # faster then read() + write() sendfile on; # send headers in one peace, its better then sending them one by one tcp_nopush on; # don't buffer data sent, good for small data bursts in real time tcp_nodelay on; # server will close connection after this time keepalive_timeout 60; # number of requests client can make over keep-alive -- for testing keepalive_requests 100000; # allow the server to close connection on non responding client, this will free up memory reset_timedout_connection on; # request timed out -- default 60 client_body_timeout 60; # if client stop responding, free up memory -- default 60 send_timeout 60; # reduce the data that needs to be sent over network gzip on; gzip_min_length 10240; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml; gzip_disable "MSIE [1-6]\."; # Load vHosts include /etc/nginx/conf.d/*.conf; } conf.d/www.domain.com.conf my vhost entry ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /.
{ return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } /etc/php-fpm.d/www-data.conf my php-fpm pool config ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. { return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
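
    Note that the block pasted above under /etc/php-fpm.d/www-data.conf repeats the nginx vhost rather than showing an FPM pool, so the actual pool settings are not visible here. A minimal pool definition consistent with the socket path the vhost expects might look like the following; every value below is an assumption:

        [www-data]
        user = www-data
        group = www-data
        listen = /var/run/php-fcgi-www-data.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 50
        pm.start_servers = 5
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35

    One common cause of a 502 over a unix socket is the socket's owner or permissions not matching the nginx worker user, which is why the listen.owner/listen.group lines matter here.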

    Read the article

  • Accessing MySQL from two ports: Problems with iptables

    - by marekventur
    Hi! I'm trying to make my MySQL server (running on Ubuntu) listen on ports 3306 and 110, because I would like to access it from a network with very few open ports. So far I've found this answer telling me to do iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 110 -j REDIRECT --to-port 3306 but all I got is: # mysql -h mydomain.com -P 3306 -u username --password=xyz Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 68863 Server version: 5.0.75-0ubuntu10.5 (Ubuntu) Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> exit Bye # mysql -h mydomain.com -P 110 -u username --password=xyz ERROR 2003 (HY000): Can't connect to MySQL server on 'mydomain.com' (111) I'm not an expert with iptables, so I'm not sure where to look for the problem. I've been googling around for quite some time but haven't found anything to help me yet. This is what iptables tells me: # iptables -t nat -L -n -v Chain PREROUTING (policy ACCEPT 32M packets, 1674M bytes) pkts bytes target prot opt in out source destination 0 0 REDIRECT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:110 redir ports 3306 Chain POSTROUTING (policy ACCEPT 855K packets, 55M bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 837K packets, 54M bytes) pkts bytes target prot opt in out source destination # iptables -L -n -v Chain INPUT (policy DROP 7 packets, 340 bytes) pkts bytes target prot opt in out source destination 107K 5390K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID limit: avg 2/sec burst 5 LOG flags 0 level 4 prefix `INPUT INVALID ' 131K 6614K DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 0 0 MY_DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x3F/0x00 0 0 MY_DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x03/0x03 0 0 MY_DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x06 0 0 MY_DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x05/0x05 0 0 MY_DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x11/0x01 0 0 MY_DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x18/0x08 0 0 MY_DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x30/0x20 6948K 12G ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 151M 34G ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 32M 1666M ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 1833 106K ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443 603 29392 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:25 1 60 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:465 24 1180 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:110 1 60 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:995 7919 400K ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:143 1 60 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:993 0 0 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:119 1 60 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:53 7 517 ACCEPT udp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:53 1110 65364 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:21 139K 8313K ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 10176 499K ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:3306 2 80 ACCEPT udp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:123 0 0 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:6060 4 176 ACCEPT tcp -- venet0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:6667 20987 1179K MY_REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 2159 284K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID limit: avg 2/sec burst 5 LOG flags 0 level 4 prefix `OUTPUT INVALID ' 2630 304K DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 6948K 12G ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 181M 34G ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state NEW,RELATED,ESTABLISHED 0 0 MY_REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain MY_DROP (7 references) pkts bytes target prot opt in out source destination 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 2/sec burst 5 LOG flags 0 level 4 prefix `PORTSCAN DROP ' 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain MY_REJECT (2 references) pkts bytes target prot opt in out source destination 13806 652K LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 2/sec burst 5 LOG flags 0 level 4 prefix `REJECT TCP ' 18171 830K REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 reject-with tcp-reset 912 242K LOG udp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 2/sec burst 5 LOG flags 0 level 4 prefix `REJECT UDP ' 912 242K REJECT udp -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable 1904 107K LOG icmp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 2/sec burst 5 LOG flags 0 level 4 prefix `DROP ICMP ' 1904 107K DROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 2/sec burst 5 LOG flags 0 level 4 prefix `REJECT OTHER ' 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-proto-unreachable Is there anyone who can give me a hint where to look for the problem? Thank you!
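
    One detail that stands out in the dumps above: the REDIRECT rule is bound to eth0 and its packet counter reads 0, while every ACCEPT rule in the filter table matches venet0 (the usual OpenVZ interface), which suggests port-110 traffic never hits the NAT rule at all. A sketch of the same rule bound to the interface the box actually uses (assuming venet0 is the right one here too):

        iptables -t nat -A PREROUTING -i venet0 -p tcp --dport 110 -j REDIRECT --to-port 3306

    Also note that PREROUTING only sees packets arriving from the network; a test run on the server itself goes through the OUTPUT chain instead, so this should always be tested from a remote machine.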

    Read the article

  • ASA 5540 v8.4(3) VPN to ASA 5505 v8.2(5), tunnel up but I can't ping from 5505 to IP on other side

    - by user223833
    I am having problems pinging from a 5505 (remote) to IP 10.160.70.10 in the network behind the 5540 (HQ side). 5505 inside IP: 10.56.0.1 Out: 71.43.109.226 5540 Inside: 10.1.0.8 out: 64.129.214.27 I can ping from 5540 to 5505 inside 10.56.0.1. I also ran ASDM packet tracer in both directions; it is ok from 5540 to 5505, but drops the packet from 5505 to 5540. It gets through the ACL and dies at the NAT. Here is the 5505 config; I am sure it is something simple I am missing. ASA Version 8.2(5) ! hostname ASA-CITYSOUTHDEPOT domain-name rngint.net names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 10.56.0.1 255.255.0.0 ! interface Vlan2 nameif outside security-level 0 ip address 71.43.109.226 255.255.255.252 ! banner motd ***ASA-CITYSOUTHDEPOT*** banner asdm CITY SOUTH DEPOT ASA5505 ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns server-group DefaultDNS domain-name rngint.net access-list outside_1_cryptomap extended permit ip host 71.43.109.226 host 10.1.0.125 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.0.0.0 255.0.0.0 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.106.70.0 255.255.255.0 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.106.130.0 255.255.255.0 access-list outside_1_cryptomap extended permit ip host 71.43.109.226 host 10.160.70.10 access-list inside_nat0_outbound extended permit ip host 71.43.109.226 host 10.1.0.125 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.0.0.0 255.0.0.0 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.106.130.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.106.70.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip host 71.43.109.226 10.106.70.0 255.255.255.0 pager lines 24 logging enable logging buffer-size 25000 logging buffered informational logging asdm warnings mtu inside 1500 mtu outside 1500 icmp unreachable rate-limit 1 burst-size 1 icmp permit any inside no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 route outside 0.0.0.0 0.0.0.0 71.43.109.225 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy aaa-server TACACS+ protocol tacacs+ aaa-server TACACS+ (inside) host 10.106.70.36 key ***** aaa authentication http console LOCAL aaa authentication ssh console LOCAL aaa authorization exec authentication-server http server enable http 192.168.1.0 255.255.255.0 inside http 10.0.0.0 255.0.0.0 inside http 0.0.0.0 0.0.0.0 outside snmp-server host inside 10.106.70.7 community ***** no snmp-server location no snmp-server contact snmp-server community ***** snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA
esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto map outside_map 1 match address outside_1_cryptomap crypto map outside_map 1 set pfs group1 crypto map outside_map 1 set peer 64.129.214.27 crypto map outside_map 1 set transform-set ESP-3DES-SHA crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh 10.0.0.0 255.0.0.0 inside ssh 0.0.0.0 0.0.0.0 outside ssh timeout 5 console timeout 0 management-access inside dhcpd auto_config outside ! dhcpd address 10.56.0.100-10.56.0.121 inside dhcpd dns 10.1.0.125 interface inside dhcpd auto_config outside interface inside ! dhcprelay server 10.1.0.125 outside dhcprelay enable inside dhcprelay setroute inside dhcprelay timeout 60 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept tftp-server inside 10.1.1.25 CITYSOUTHDEPOT-ASA-Confg webvpn tunnel-group 64.129.214.27 type ipsec-l2l tunnel-group 64.129.214.27 ipsec-attributes pre-shared-key ***** ! ! prompt hostname context
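
    A hedged observation on the config above: the only ACL entries naming 10.160.70.10 explicitly pair it with the outside interface address 71.43.109.226, and a ping issued from the ASA's own CLI is sourced from the egress interface by default, which may be why the host-to-host entries were added. To test the actual LAN-to-LAN path, sourcing the ping from the inside interface keeps the traffic inside the 10.56.0.0/16 entries that both the crypto ACL and nat0 already cover (mirrored entries are still needed on the 5540 side):

        ping inside 10.160.70.10

    Treat this as a guess to verify with packet tracer (source 10.56.0.1, destination 10.160.70.10), not a definitive fix.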

    Read the article

  • How can I export a DataTable to MS Word 2007, Excel 2007, or CSV from ASP.NET?

    - by bala3569
    Hi, I am using the code below to export a DataTable to MS Word, Excel and CSV formats, and it's working fine. But the problem is that this code exports to the MS Word 2003/Excel 2003 formats. I need to export my DataTable to Word 2007, Excel 2007 and CSV because I am supposed to handle more than 100,000 records at a time, and as we know Excel 2003 supports only about 65,000 rows. Please help me out if you know how to export a DataTable or DataSet to MS Word 2007/Excel 2007. public static void Convertword(DataTable dt, HttpResponse Response,string filename) { Response.Clear(); Response.AddHeader("content-disposition", "attachment;filename=" + filename + ".doc"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "application/vnd.word"; System.IO.StringWriter stringWrite = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htmlWrite = new System.Web.UI.HtmlTextWriter(stringWrite); System.Web.UI.WebControls.GridView dg = new System.Web.UI.WebControls.GridView(); dg.DataSource = dt; dg.DataBind(); dg.RenderControl(htmlWrite); Response.Write(stringWrite.ToString()); Response.End(); //HttpContext.Current.ApplicationInstance.CompleteRequest(); } public static void Convertexcel(DataTable dt, HttpResponse Response, string filename) { Response.Clear(); Response.AddHeader("content-disposition", "attachment;filename=" + filename + ".xls"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "application/vnd.ms-excel"; System.IO.StringWriter stringWrite = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htmlWrite = new System.Web.UI.HtmlTextWriter(stringWrite); System.Web.UI.WebControls.DataGrid dg = new System.Web.UI.WebControls.DataGrid(); dg.DataSource = dt; dg.DataBind(); dg.RenderControl(htmlWrite); Response.Write(stringWrite.ToString()); Response.End(); //HttpContext.Current.ApplicationInstance.CompleteRequest(); } public static void ConvertCSV(DataTable dataTable, HttpResponse Response, string filename) { Response.Clear(); Response.Buffer = true; Response.AddHeader("content-disposition", "attachment;filename=" + filename + ".csv"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "Application/x-msexcel"; StringBuilder sb = new StringBuilder(); if (dataTable.Columns.Count != 0) { foreach (DataColumn column in dataTable.Columns) { sb.Append(column.ColumnName + ','); } sb.Append("\r\n"); foreach (DataRow row in dataTable.Rows) { foreach (DataColumn column in dataTable.Columns) { if(row[column].ToString().Contains(',')==true) { row[column] = row[column].ToString().Replace(",", ""); } sb.Append(row[column].ToString() + ','); } sb.Append("\r\n"); } } Response.Write(sb.ToString()); Response.End(); //HttpContext.Current.ApplicationInstance.CompleteRequest(); }
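
    The HTML-rendering trick above is what ties the output to the 2003 formats. For real .xlsx output (Excel 2007, no 65,536-row limit) one option is a library that writes Open XML directly; a sketch using the third-party EPPlus package (the package choice and its API are assumptions, not part of the original code):

        public static void ConvertXlsx(DataTable dt, HttpResponse Response, string filename)
        {
            Response.Clear();
            Response.AddHeader("content-disposition", "attachment;filename=" + filename + ".xlsx");
            Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
            using (var pkg = new OfficeOpenXml.ExcelPackage())
            {
                var ws = pkg.Workbook.Worksheets.Add("Sheet1");
                ws.Cells["A1"].LoadFromDataTable(dt, true); // true = emit column headers
                Response.BinaryWrite(pkg.GetAsByteArray()); // write the .xlsx bytes directly
            }
            Response.End();
        }

    For Word 2007, the same idea applies with a .docx-producing library (e.g. the Open XML SDK); the HtmlTextWriter approach only ever produces HTML with an Office content type.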

    Read the article

  • Linker error when compiling boost.asio example

    - by Alon
    Hi, I'm trying to learn a little bit of C++ and Boost.Asio. I'm trying to compile the following code example: #include <iostream> #include <boost/array.hpp> #include <boost/asio.hpp> using boost::asio::ip::tcp; int main(int argc, char* argv[]) { try { if (argc != 2) { std::cerr << "Usage: client <host>" << std::endl; return 1; } boost::asio::io_service io_service; tcp::resolver resolver(io_service); tcp::resolver::query query(argv[1], "daytime"); tcp::resolver::iterator endpoint_iterator = resolver.resolve(query); tcp::resolver::iterator end; tcp::socket socket(io_service); boost::system::error_code error = boost::asio::error::host_not_found; while (error && endpoint_iterator != end) { socket.close(); socket.connect(*endpoint_iterator++, error); } if (error) throw boost::system::system_error(error); for (;;) { boost::array<char, 128> buf; boost::system::error_code error; size_t len = socket.read_some(boost::asio::buffer(buf), error); if (error == boost::asio::error::eof) break; // Connection closed cleanly by peer. else if (error) throw boost::system::system_error(error); // Some other error. std::cout.write(buf.data(), len); } } catch (std::exception& e) { std::cerr << e.what() << std::endl; } return 0; } With the following command line: g++ -I /usr/local/boost_1_42_0 a.cpp and it throws an unclear error: /tmp/ccCv9ZJA.o: In function `__static_initialization_and_destruction_0(int, int)': a.cpp:(.text+0x654): undefined reference to `boost::system::get_system_category()' a.cpp:(.text+0x65e): undefined reference to `boost::system::get_generic_category()' a.cpp:(.text+0x668): undefined reference to `boost::system::get_generic_category()' a.cpp:(.text+0x672): undefined reference to `boost::system::get_generic_category()' a.cpp:(.text+0x67c): undefined reference to `boost::system::get_system_category()' /tmp/ccCv9ZJA.o: In function `boost::system::error_code::error_code()': a.cpp:(.text._ZN5boost6system10error_codeC2Ev[_ZN5boost6system10error_codeC5Ev]+0x10): undefined reference to `boost::system::get_system_category()' /tmp/ccCv9ZJA.o: In function `boost::asio::error::get_system_category()': a.cpp:(.text._ZN5boost4asio5error19get_system_categoryEv[boost::asio::error::get_system_category()]+0x7): undefined reference to `boost::system::get_system_category()' /tmp/ccCv9ZJA.o: In function `boost::asio::detail::posix_thread::~posix_thread()': a.cpp:(.text._ZN5boost4asio6detail12posix_threadD2Ev[_ZN5boost4asio6detail12posix_threadD5Ev]+0x1d): undefined reference to `pthread_detach' /tmp/ccCv9ZJA.o: In function `boost::asio::detail::posix_thread::join()': a.cpp:(.text._ZN5boost4asio6detail12posix_thread4joinEv[boost::asio::detail::posix_thread::join()]+0x25): undefined reference to `pthread_join' /tmp/ccCv9ZJA.o: In function `boost::asio::detail::posix_tss_ptr<boost::asio::detail::call_stack<boost::asio::detail::task_io_service<boost::asio::detail::epoll_reactor<false> > >::context>::~posix_tss_ptr()': a.cpp:(.text._ZN5boost4asio6detail13posix_tss_ptrINS1_10call_stackINS1_15task_io_serviceINS1_13epoll_reactorILb0EEEEEE7contextEED2Ev[_ZN5boost4asio6detail13posix_tss_ptrINS1_10call_stackINS1_15task_io_serviceINS1_13epoll_reactorILb0EEEEEE7contextEED5Ev]+0xf): undefined reference to `pthread_key_delete' /tmp/ccCv9ZJA.o: In function `boost::asio::detail::posix_tss_ptr<boost::asio::detail::call_stack<boost::asio::detail::task_io_service<boost::asio::detail::epoll_reactor<false> > >::context>::posix_tss_ptr()':
a.cpp:(.text._ZN5boost4asio6detail13posix_tss_ptrINS1_10call_stackINS1_15task_io_serviceINS1_13epoll_reactorILb0EEEEEE7contextEEC2Ev[_ZN5boost4asio6detail13posix_tss_ptrINS1_10call_stackINS1_15task_io_serviceINS1_13epoll_reactorILb0EEEEEE7contextEEC5Ev]+0x22): undefined reference to `pthread_key_create' collect2: ld returned 1 exit status How can I fix it? Thank you.
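
    Every missing symbol in that list comes from either Boost.System (boost::system::*) or pthreads (pthread_*), which points at the link line rather than the code. A sketch of a command that links both (the -L path is a guess; it depends on where the Boost 1.42 libraries were actually built or installed):

        g++ -I /usr/local/boost_1_42_0 a.cpp -L /usr/local/boost_1_42_0/stage/lib -lboost_system -lpthread

    Boost.Asio is header-only, but it depends on the compiled Boost.System library, so -lboost_system (and -lpthread on Linux) is required even for this small example.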

    Read the article

  • Problem receiving in RXTX

    - by drhorrible
    I've been using RXTX for about a year now, without too many problems. I just started a new program to interact with a new piece of hardware, so I reused the connect() method I've used on my other projects, but I have a weird problem I've never seen before. The Problem The device works fine, because when I connect with HyperTerminal, I send things and receive what I expect, and Serial Port Monitor (SPM) reflects this. However, when I run the simple HyperTerminal-clone I wrote to diagnose the problem I'm having with my main app, bytes are sent, according to SPM, but nothing is received, and my SerialPortEventListener never fires. Even when I check for available data in the main loop, reader.ready() returns false. If I ignore this check, then I get an exception, details below. Relevant section of connect() method // Configure and open port port = (SerialPort) CommPortIdentifier.getPortIdentifier(name) .open(owner,1000); port.setSerialPortParams(baud, databits, stopbits, parity); port.setFlowControlMode(fc_mode); final BufferedReader br = new BufferedReader( new InputStreamReader( port.getInputStream(), "US-ASCII")); // Add listener to print received characters to screen port.addEventListener(new SerialPortEventListener(){ public void serialEvent(SerialPortEvent ev) { try { System.out.println("Received: "+br.readLine()); } catch (IOException e) { e.printStackTrace(); } } }); port.notifyOnDataAvailable(true); Exception java.io.IOException: Underlying input stream returned zero bytes at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:268) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158) at java.io.InputStreamReader.read(InputStreamReader.java:167) at java.io.BufferedReader.fill(BufferedReader.java:136) at java.io.BufferedReader.read(BufferedReader.java:157) at <my code> The big question (again) I think I've eliminated all possible hardware problems, so what could be wrong with my code, or the RXTX library? Edit: something interesting When I open HyperTerminal after sending a bunch of commands from Java that should have gotten responses, all of the responses appear immediately, as if they had been put in the buffer somewhere, but unavailable. Edit 2: Tried something new, same results I ran the code example found here, with the same results. No data came in, but when I switched to a new program, it came all at once. Edit 3 The hardware is fine, and even a different computer has the same problem. I am not using any sort of USB adapter. I've started using PortMon, too, and it's giving me some interesting results. HyperTerminal and RXTX are not using the same settings, and RXTX always polls the port, unlike HyperTerminal, but I still can't see what settings would affect this. As soon as I can isolate the configuration from the constant polling, I'll post my PortMon logs. Edit 4 Is it possible that some sort of Windows update in the last 3 months could have caused this? It has screwed up one of my MATLAB mex-based programs once. Edit 5 I've also noticed some things that are different between HyperTerminal, RXTX, and a separate program I found that communicates with the device (but doesn't do what I want, which is why I'm rolling my own program) HyperTerminal - set to no flow control, but Serial Port Monitor's RTS and DTR indicators are green Other program - not sure what settings it thinks it's using, but only SPM's RTS indicator is green RXTX - no matter what flow control I set, only SPM's CTS and DTR indicators are on.
From Serial Port Monitor's help files (paraphrased): the indicators display the state of the serial control lines RTS - Request To Send CTS - Clear To Send DTR - Data Terminal Ready
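
    Given the Edit 5 observation that HyperTerminal raises both RTS and DTR while the RXTX session does not, one experiment worth trying is asserting both lines explicitly after the port is opened; gnu.io.SerialPort has setters for them (a sketch, placed right after setFlowControlMode):

        port.setDTR(true); // raise Data Terminal Ready, as HyperTerminal does
        port.setRTS(true); // raise Request To Send

    Some devices hold their output until DTR/RTS are asserted, which would also fit the symptom of buffered responses appearing all at once when another program opens the port.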

    Read the article

  • Receiving UDP packets on iPhone

    - by Eli
    I'm trying to establish UDP communication between a Mac and an iPod through Wi-Fi. At this point I'm able to send packets from the iPod, and I can see those packets have the right MAC and IP addresses (I'm using Wireshark to monitor the network), but the Mac receives the packets only when Wireshark is on; otherwise recvfrom() returns -1. When I try to transmit from the Mac to the iPhone I have the same result: I can see the packets are sent, but the iPhone doesn't seem to get them. I'm using the next code to send: struct addrinfo hints, *servinfo, *p; int rv; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_DGRAM; if ((rv = getaddrinfo(IP, SERVERPORT, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return 1; } // loop through all the results and make a socket for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("talker: socket"); continue; } break; } if (p == NULL) { fprintf(stderr, "talker: failed to bind socket\n"); return 2; } while (cond) sntBytes += sendto(sockfd, message, strlen(message), 0, p->ai_addr, p->ai_addrlen); return 0; and this code to receive: struct addrinfo hints, *servinfo, *p; int rv; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; // set to AF_INET to force IPv4 hints.ai_socktype = SOCK_DGRAM; hints.ai_flags = AI_PASSIVE; // use my IP if ((rv = getaddrinfo(NULL, MYPORT, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return 1; } // loop through all the results and bind to the first we can for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("listener: socket"); continue; } if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) { close(sockfd); perror("listener: bind"); continue; } break; } if (p == NULL) { fprintf(stderr, "listener: failed to bind socket\n"); return 2; } addr_len = sizeof their_addr; fcntl(sockfd, F_SETFL,O_NONBLOCK); int rcvbuf_size = 128 * 1024; // That's 128Kb of buffer space. setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf_size, sizeof(rcvbuf_size)); printf("listener: waiting to recvfrom...\n"); while (cond) rcvBytes = recvfrom(sockfd, buf, MAXBUFLEN-1 , 0, (struct sockaddr *)&their_addr, &addr_len); return 0; What am I missing?
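
    One thing worth separating out in the receive loop above: the socket is switched to non-blocking mode with fcntl(sockfd, F_SETFL, O_NONBLOCK), so recvfrom() returning -1 with errno set to EWOULDBLOCK just means "no data yet" and is not a real error. A sketch of the loop distinguishing the two cases (only the error handling is new; everything else is unchanged):

        #include <errno.h>

        while (cond) {
            ssize_t n = recvfrom(sockfd, buf, MAXBUFLEN-1, 0,
                                 (struct sockaddr *)&their_addr, &addr_len);
            if (n >= 0) {
                rcvBytes += n; /* got a datagram */
            } else if (errno != EWOULDBLOCK && errno != EAGAIN) {
                perror("recvfrom"); /* a real failure, not just an empty queue */
                break;
            }
        }

    If a real errno shows up, that narrows things down considerably. The Wireshark symptom is also worth re-checking against the destination MAC address in the capture: promiscuous mode makes the interface accept frames it would normally drop, so packets carrying a stale MAC (e.g. from an outdated ARP entry) "arrive" only while capture is running.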

    Read the article

  • C++ custom exceptions: run time performance and passing exceptions from C++ to C

    - by skyeagle
    I am writing a custom C++ exception class (so I can pass exceptions occurring in C++ to another language via a C API). My initial plan of attack was to proceed as follows: //C++ class myClass { public: myClass(); ~myClass(); void foo() // throws myException int foo(const int i, const bool b) // throws myException } * myClassPtr; // C API #ifdef __cplusplus extern "C" { #endif myClassPtr MyClass_New(); void MyClass_Destroy(myClassPtr p); void MyClass_Foo(myClassPtr p); int MyClass_FooBar(myClassPtr p, int i, bool b); #ifdef __cplusplus }; #endif I need a way to be able to pass exceptions thrown in the C++ code to the C side. The information I want to pass to the C side is the following: (a). What (b). Where (c). Simple Stack Trace (just the sequence of error messages in the order they occurred, no debugging info etc) I want to modify my C API, so that the API functions take a pointer to a struct ExceptionInfo, which will contain any exception info (if an exception occurred) before consuming the results of the invocation. This raises two questions: Question 1 1. Implementation of each of the C++ methods exposed in the C API needs to be enclosed in a try/catch statement. The performance implications for this seem quite serious (according to this article): "It is a mistake (with high runtime cost) to use C++ exception handling for events that occur frequently, or for events that are handled near the point of detection." At the same time, I remember reading somewhere in my C++ days that although exception handling is expensive, it only becomes expensive when an exception actually occurs. So, which is correct? What to do? Is there an alternative way that I can trap errors safely and pass the resulting error info to the C API? Or is this a minor consideration (the article, after all, is quite old, and hardware has improved a bit since then)? Question 2 I would like to modify the exception class given in that article, so that it contains a simple stack trace, and I need some help doing that. Again, in order to make the exception class 'lightweight', I think it's a good idea not to include any STL classes, like string or vector (good idea/bad idea?). Which potentially leaves me with a fixed length C string (char*) which will be stack allocated. So I can maybe just keep appending messages (delimited by a unique separator [up to maximum length of buffer])... It's been a while since I did any serious C++ coding, and I will be grateful for the help. BTW, this is what I have come up with so far (I am intentionally not deriving from std::exception because of the performance reasons mentioned in the article, and am instead throwing an integral exception (based on an exception enumeration): class fast_exception { public: fast_exception(int what, char const* file=0, int line=0) : what_(what), line_(line), file_(file) {/*empty*/} int what() const { return what_; } int line() const { return line_; } char const* file() const { return file_; } private: int what_; int line_; char const* file_; };
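
    On Question 2, a sketch of a trace buffer that avoids the STL entirely: a fixed char array inside the exception object, with an append that separates messages with '|' and silently truncates when full (the 512-byte size and the separator are arbitrary choices):

        #include <cstring>

        class fast_exception {
        public:
            fast_exception() { trace_[0] = '\0'; }
            // Append one message to the trace; drops it if the buffer is full.
            void append_trace(char const* msg) {
                std::size_t used = std::strlen(trace_);
                if (used + 1 < sizeof trace_) {
                    if (used) trace_[used++] = '|';
                    std::strncpy(trace_ + used, msg, sizeof trace_ - used - 1);
                    trace_[sizeof trace_ - 1] = '\0';
                }
            }
            char const* trace() const { return trace_; }
        private:
            char trace_[512]; // fixed size, no heap allocation
        };

    On Question 1, the conventional answer for modern compilers is that table-based ("zero-cost") exception handling makes the try/catch wrappers essentially free on the non-throwing path; the cost is paid only when an exception actually propagates.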

    Read the article

  • Jumbled byte array after using TcpClient and TcpListener

    - by Dylan
    I want to use the TcpClient and TcpListener to send an mp3 file over a network. I implemented a solution for this using sockets, but there were some issues so I am investigating a new/better way to send a file. I create a byte array which looks like this: length_of_filename|filename|file This should then be transmitted using the above-mentioned classes, yet on the server side the byte array I read is completely messed up and I'm not sure why. The method I use to send: public static void Send(String filePath) { try { IPEndPoint endPoint = new IPEndPoint(Settings.IpAddress, Settings.Port + 1); Byte[] fileData = File.ReadAllBytes(filePath); FileInfo fi = new FileInfo(filePath); List<byte> dataToSend = new List<byte>(); dataToSend.AddRange(BitConverter.GetBytes(Encoding.Unicode.GetByteCount(fi.Name))); // length of filename dataToSend.AddRange(Encoding.Unicode.GetBytes(fi.Name)); // filename dataToSend.AddRange(fileData); // file binary data using (TcpClient client = new TcpClient()) { client.Connect(Settings.IpAddress, Settings.Port + 1); // Get a client stream for reading and writing. using (NetworkStream stream = client.GetStream()) { // server is ready stream.Write(dataToSend.ToArray(), 0, dataToSend.ToArray().Length); } } } catch (ArgumentNullException e) { Debug.WriteLine(e); } catch (SocketException e) { Debug.WriteLine(e); } } } Then on the server side it looks as follows: private void Listen() { TcpListener server = null; try { // Setup the TcpListener Int32 port = Settings.Port + 1; IPAddress localAddr = IPAddress.Parse("127.0.0.1"); // TcpListener server = new TcpListener(port); server = new TcpListener(localAddr, port); // Start listening for client requests. server.Start(); // Buffer for reading data Byte[] bytes = new Byte[1024]; List<byte> data; // Enter the listening loop. while (true) { Debug.WriteLine("Waiting for a connection... "); string filePath = string.Empty; // Perform a blocking call to accept requests. // You could also use server.AcceptSocket() here. using (TcpClient client = server.AcceptTcpClient()) { Debug.WriteLine("Connected to client!"); data = new List<byte>(); // Get a stream object for reading and writing using (NetworkStream stream = client.GetStream()) { // Loop to receive all the data sent by the client. while ((stream.Read(bytes, 0, bytes.Length)) != 0) { data.AddRange(bytes); } } } int fileNameLength = BitConverter.ToInt32(data.ToArray(), 0); filePath = Encoding.Unicode.GetString(data.ToArray(), 4, fileNameLength); var binary = data.GetRange(4 + fileNameLength, data.Count - 4 - fileNameLength); Debug.WriteLine("File successfully downloaded!"); // write it to disk using (BinaryWriter writer = new BinaryWriter(File.Open(filePath, FileMode.Append))) { writer.Write(binary.ToArray(), 0, binary.Count); } } } catch (Exception ex) { Debug.WriteLine(ex); } finally { // Stop listening for new clients. server.Stop(); } } Can anyone see something that I am missing/doing wrong?
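
    One thing that stands out in the server's read loop: Stream.Read() returns how many bytes it actually placed in the buffer, but the loop appends the entire 1024-byte array on every pass, so stale bytes from earlier reads (and trailing garbage on the final short read) get mixed into data. A sketch of the corrected loop, with the rest of the method unchanged:

        int read;
        while ((read = stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            // Keep only the bytes this Read() call actually produced
            for (int i = 0; i < read; i++)
                data.Add(bytes[i]);
        }

    That alone would explain the "jumbled" result, since every chunk except an exactly-full one carries leftovers from the previous iteration.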

    Read the article

  • Real-time graphing in Java

    - by thodinc
    I have an application which updates a variable between 5 and 50 times a second and I am looking for some way of drawing a continuous XY plot of this change in real-time. Though JFreeChart is not recommended for such a high update rate, many users still say that it works for them. I've tried using this demo and modified it to display a random variable, but it seems to use 100% CPU all the time. Even if I ignore that, I do not want to be restricted to JFreeChart's ui class for constructing forms (though I'm not sure what its capabilities are exactly). Would it be possible to integrate it with Java's "forms" and drop-down menus? (as are available in VB) Otherwise, are there any alternatives I could look into? EDIT: I'm new to Swing, so I've put together some code just to test the functionality of JFreeChart with it (while avoiding the use of the ApplicationFrame class of JFree since I'm not sure how that will work with Swing's combo boxes and buttons). Right now, the graph is being updated immediately and CPU usage is high. Would it be possible to buffer the value with new Millisecond() and update it maybe twice a second? Also, can I add other components to the rest of the JFrame without disrupting JFreeChart? How would I do that? frame.getContentPane().add(new Button("Click")) seems to overwrite the graph. package graphtest; import java.util.Random; import javax.swing.JFrame; import org.jfree.chart.ChartFactory; import org.jfree.chart.ChartPanel; import org.jfree.chart.JFreeChart; import org.jfree.chart.axis.ValueAxis; import org.jfree.chart.plot.XYPlot; import org.jfree.data.time.Millisecond; import org.jfree.data.time.TimeSeries; import org.jfree.data.time.TimeSeriesCollection; public class Main { static TimeSeries ts = new TimeSeries("data", Millisecond.class); public static void main(String[] args) throws InterruptedException { gen myGen = new gen(); new Thread(myGen).start(); TimeSeriesCollection dataset = new TimeSeriesCollection(ts); JFreeChart chart = ChartFactory.createTimeSeriesChart( "GraphTest", "Time", "Value", dataset, true, true, false ); final XYPlot plot = chart.getXYPlot(); ValueAxis axis = plot.getDomainAxis(); axis.setAutoRange(true); axis.setFixedAutoRange(60000.0); JFrame frame = new JFrame("GraphTest"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); ChartPanel label = new ChartPanel(chart); frame.getContentPane().add(label); //Suppose I add combo boxes and buttons here later frame.pack(); frame.setVisible(true); } static class gen implements Runnable { private Random randGen = new Random(); public void run() { while(true) { int num = randGen.nextInt(1000); System.out.println(num); ts.addOrUpdate(new Millisecond(), num); try { Thread.sleep(20); } catch (InterruptedException ex) { System.out.println(ex); } } } } }
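
    On the buffering idea at the end: one approach is to let the generator thread only record the latest sample and have a javax.swing.Timer flush it into the TimeSeries at a fixed rate on the event-dispatch thread; this also removes the thread-safety problem of calling addOrUpdate from a background thread. A sketch (the 500 ms period is an arbitrary choice):

        // shared between gen and the timer
        static final java.util.concurrent.atomic.AtomicInteger latest =
                new java.util.concurrent.atomic.AtomicInteger();

        // in gen.run(), replace ts.addOrUpdate(...) with:
        latest.set(num);

        // in main(), after frame.setVisible(true):
        new javax.swing.Timer(500, new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent e) {
                ts.addOrUpdate(new Millisecond(), latest.get()); // runs on the EDT
            }
        }).start();

    As for the button overwriting the chart: the frame's default BorderLayout puts every unconstrained add() in the center, replacing what was there; adding the extra components with constraints, e.g. frame.getContentPane().add(button, java.awt.BorderLayout.SOUTH), keeps both visible.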

    Read the article

  • How to get IMediaControl.Run() to start a file playing with no delay

    - by MusiGenesis
    I am attempting to use DirectShow to play two AVI files consecutively (one after the other) so that there is no interruption in the audio or video when the player transitions from one file to the next. I have two custom controls on my form. Each one is pre-loaded with an AVI file, and before playback begins I set up all the DirectShow interfaces, set the video windows and resize them, call IMediaControl.Run(), then IMediaControl.Pause(), then IMediaSeeking.SetPositions to reset to frame 0, on both controls. On the form, you can see that both files are paused at their initial frames. I then call IMediaControl.Run() on the first control, and wait for it to complete before calling Run() on the second control. Initially, I hooked into the first video's EC_COMPLETE notification message, and used this to start the second. Thinking that this event might be slow to arrive (turns out it is, but for a weird reason), I tried two other approaches: Check the first video's current position inside a timer that goes off every second or so (using IMediaPosition.get_CurrentPosition). When the current position is within a second of the video's stop time (known in advance from IMediaPosition.get_StopTime), I go into a tight while loop and wait for the current position to equal the stop time, and then call Run() on the second video. Same as the first, except I replace the while loop with a call to timeSetEvent from winmm.dll, with a delay set so that it fires right when the first file is supposed to end. I use the callback to Run() the second file. Either of these two methods substantially cuts down the delay between the end of the first file and the beginning of the second, indicating that the EC_COMPLETE message doesn't arrive immediately after the file is complete (I also tried hooking the EC_SEGMENT_COMPLETE message, which is supposed to be used for looping within a file, but apparently nobody supports this - it never occurs on my machine, at least). Doing all of the above has cut the transition delay from as much as a second, down to a barely perceptible glitch; about a third of the time the files transition with no interruption at all, which suggests there's no fundamental reason I can't get this to work all the time. The slight delay is still unacceptable, unfortunately. I assume (and I could easily be wrong) that the remaining delay is due to a slight variable delay between the call to IMediaControl.Run() and when the video actually starts playing. Does anybody know anything I can do to eliminate this little lag? It would also help to be told this is fundamentally impossible for whatever reason, which wouldn't surprise me. I've never encountered a video player in Windows that doesn't have this problem, so it may not be doable. More info: the AVI files I'm playing are completely uncompressed (video and audio are uncompressed), so I don't think the lag is due to DirectShow's having to uncompress the video ahead of play start, although it may still buffer ahead as a matter of course (and this may be the source of the problem). I would have thought that starting play, pausing and then rewinding to the beginning would fix this. Also, the way I'm handling the transition is to actually have the second control underneath the first; when the first completes playing, I start the second and then call BringToFront on it, creating the appearance of a single video transitioning between the two originals.
I don't think the glitch is due to this, because it works perfectly some of the time, and even if this were problematic, it wouldn't explain the matching audio glitch. Even more: I just tried starting the second video 30-50 milliseconds "early" and that seemed to eliminate even more of the gap, so I'm guessing that the lag in Run() is about that long. It appears to be variable, though, so this is still not where I need it to be.
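
    For what it's worth, Pause() is asynchronous, so one more thing to rule out is the second graph not having finished cueing by the time Run() is called: blocking on GetState after the Pause() guarantees the pre-roll is complete. A sketch in DirectShow.NET-style C# (the wrapper's method names here are an assumption):

        // Cue the second graph; Pause() returns before the transition finishes,
        // so wait for the paused state to guarantee data is pre-rolled.
        mediaControl2.Pause();
        FilterState state;
        int hr = mediaControl2.GetState(5000, out state); // block up to 5 s

        // ... later, ~30-50 ms before file 1 ends, as measured above:
        mediaControl2.Run();

    If the graph is fully cued, the residual latency of Run() should be close to constant, which would make the "start early" compensation reliable instead of variable.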

    Read the article

  • Neo4j OutOfMemory problem

    - by Edward83
    Hi! This is my source code for Main.java. It was grabbed from the neo4j-apoc-1.0 examples. The goal of the modification is to store 1M records of 2 nodes and 1 relation: package javaapplication2; import org.neo4j.graphdb.GraphDatabaseService; import org.neo4j.graphdb.Node; import org.neo4j.graphdb.RelationshipType; import org.neo4j.graphdb.Transaction; import org.neo4j.kernel.EmbeddedGraphDatabase; public class Main { private static final String DB_PATH = "neo4j-store-1M"; private static final String NAME_KEY = "name"; private static enum ExampleRelationshipTypes implements RelationshipType { EXAMPLE } public static void main(String[] args) { GraphDatabaseService graphDb = null; try { System.out.println( "Init database..." ); graphDb = new EmbeddedGraphDatabase( DB_PATH ); registerShutdownHook( graphDb ); System.out.println( "Start of creating database..." ); int valIndex = 0; for(int i=0; i<1000; ++i) { for(int j=0; j<1000; ++j) { Transaction tx = graphDb.beginTx(); try { Node firstNode = graphDb.createNode(); firstNode.setProperty( NAME_KEY, "Hello" + valIndex ); Node secondNode = graphDb.createNode(); secondNode.setProperty( NAME_KEY, "World" + valIndex ); firstNode.createRelationshipTo( secondNode, ExampleRelationshipTypes.EXAMPLE ); tx.success(); ++valIndex; } finally { tx.finish(); } } } System.out.println("Ok, client processing finished!"); } finally { System.out.println( "Shutting down database ..." ); graphDb.shutdown(); } } private static void registerShutdownHook( final GraphDatabaseService graphDb ) { // Registers a shutdown hook for the Neo4j instance so that it // shuts down nicely when the VM exits (even if you "Ctrl-C" the // running example before it's completed) Runtime.getRuntime().addShutdownHook( new Thread() { @Override public void run() { graphDb.shutdown(); } } ); } } After a few iterations (around 150K) I got an error message: "java.lang.OutOfMemoryError: Java heap space at java.nio.HeapByteBuffer.(HeapByteBuffer.java:39) at java.nio.ByteBuffer.allocate(ByteBuffer.java:312) at org.neo4j.kernel.impl.nioneo.store.PlainPersistenceWindow.(PlainPersistenceWindow.java:30) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.allocateNewWindow(PersistenceWindowPool.java:534) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.refreshBricks(PersistenceWindowPool.java:430) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:122) at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:459) at org.neo4j.kernel.impl.nioneo.store.AbstractDynamicStore.updateRecord(AbstractDynamicStore.java:240) at org.neo4j.kernel.impl.nioneo.store.PropertyStore.updateRecord(PropertyStore.java:209) at org.neo4j.kernel.impl.nioneo.xa.Command$PropertyCommand.execute(Command.java:513) at org.neo4j.kernel.impl.nioneo.xa.NeoTransaction.doCommit(NeoTransaction.java:443) at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:316) at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:399) at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64) at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:514) at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:571) at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:543) at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:102) at
org.neo4j.kernel.EmbeddedGraphDbImpl$TransactionImpl.finish(EmbeddedGraphDbImpl.java:329) at javaapplication2.Main.main(Main.java:62) 28.05.2010 9:52:14 org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool logWarn WARNING: [neo4j-store-1M\neostore.propertystore.db.strings] Unable to allocate direct buffer" Guys! Help me please: what did I do wrong, and how can I repair it? Tested on Windows XP 32-bit SP3. Maybe the solution is creating a custom configuration? Thanks for any advice!
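
    Two knobs usually matter for this failure, since the allocation dies inside PersistenceWindowPool, i.e. Neo4j's memory-mapped store buffers are competing with the JVM heap: give the JVM more headroom (e.g. java -Xmx1024m ...) and/or cap the mapped-memory sizes when creating the database. A sketch; the constructor overload and property names are from the 1.0-era API and the values are guesses to tune, not recommendations:

        // requires: import java.util.HashMap; import java.util.Map;
        Map<String, String> config = new HashMap<String, String>();
        config.put("neostore.propertystore.db.strings.mapped_memory", "50M");
        config.put("neostore.propertystore.db.mapped_memory", "90M");
        GraphDatabaseService graphDb = new EmbeddedGraphDatabase(DB_PATH, config);

    Batching more work per transaction (say, committing every 1,000 pairs instead of every pair) also cuts overhead, though the OutOfMemoryError itself points at the buffer/heap budget rather than the transaction pattern.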

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151  | Next Page >