Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.


  • DNS server not functioning correctly

    - by Shamit Shrestha
    I have setup a DNS server which isnt working properly. My domain is accswift.com which has glued to two name servers ns1.accswift.com and ns2.accswift.com for the same IP address - 203.78.164.18. On domain end everything should be fine. Please check -http://www.intodns.com/accswift.com I am sure its the problem with the linux server. Can anyone help me find where the problem is for me? Below is the settings that I have in the server. ====================== DIG [root@accswift ~]# dig accswift.com ; << DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 << accswift.com ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: NOERROR, id: 11275 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;accswift.com. IN A ;; ANSWER SECTION: accswift.com. 38400 IN A 203.78.164.18 ;; AUTHORITY SECTION: accswift.com. 38400 IN NS ns1.accswift.com. accswift.com. 38400 IN NS ns2.accswift.com. ;; ADDITIONAL SECTION: ns1.accswift.com. 38400 IN A 203.78.164.18 ns2.accswift.com. 38400 IN A 203.78.164.18 ;; Query time: 1 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Wed Nov 6 20:12:16 2013 ;; MSG SIZE rcvd: 114 ============== IP Tables settings vi /etc/sysconfig/iptables *filter :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A OUTPUT -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A INPUT -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A INPUT -p udp -m udp --sport 53 -j ACCEPT -A OUTPUT -p udp -m udp --dport 53 -j ACCEPT COMMIT Completed on Fri Sep 20 04:20:33 2013 Generated by webmin *mangle :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT Completed Generated by webmin *nat :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT ====DNS settings vi /var/named/accswift.com.host $ttl 38400 @ IN SOA ns1.accswift.com. root.ns1.accswift.com. ( 1382936091 10800 3600 604800 38400 ) @ IN NS ns1.accswift.com. @ IN NS ns2.accswift.com. accswift.com. IN A 203.78.164.18 accswift.com. IN NS ns1.accswift.com. www.accswift.com. IN A 203.78.164.18 ftp.accswift.com. IN A 203.78.164.18 m.accswift.com. IN A 203.78.164.18 ns1 IN A 203.78.164.18 ns2 IN A 203.78.164.18 localhost.accswift.com. IN A 127.0.0.1 webmail.accswift.com. IN A 203.78.164.18 admin.accswift.com. IN A 203.78.164.18 mail.accswift.com. IN A 203.78.164.18 accswift.com. IN MX 5 mail.accswift.com. ====Named.conf vi /etc/named.conf options { listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; allow-recursion { localhost; 192.168.2.0/24; }; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; forward first; forwarders {192.168.1.1;}; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." 
IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "accswift.com" { type master; file "/var/named/accswift.com.hosts"; allow-transfer { 127.0.0.1; localnets; 208.73.211.69; }; }; zone "ns1.accswift.com" { type master; file "/var/named/ns1.accswift.com.hosts"; }; ==================================== Can anybody find any flaw in this? I am still unable to reach accswift.com from any other ISP, although it is browsable from within the same network. Thanks in advance.
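    A possible starting point rather than a confirmed fix: the named.conf above listens only on 127.0.0.1 (and ::1), which matches the symptom of the zone resolving from the box itself but not from other ISPs; the zone file edited above is also named accswift.com.host while named.conf points at accswift.com.hosts, which is worth checking. A minimal sketch of the change, assuming 203.78.164.18 is configured on the server's public interface:

        # /etc/named.conf -- listen on the public address (or "any") as well as loopback
        listen-on port 53 { 127.0.0.1; 203.78.164.18; };

        # then reload and test from outside the box
        service named restart
        dig @203.78.164.18 accswift.com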

    Read the article

  • Weird routing issue (updated)

    - by smccloud
    I just updated the route tables due to a mistake on my part. I am working on getting networking working correctly on a cluster of 14 virtual servers at a customer site. 11 of them work fine for routing and 3 don't work correctly for their administrative network (172.28.56.0). All are running Windows Web Server 2008R2. Default gateway is set on the production network (172.28.58.0) and not on the administrative network (handled with persistent static routes). On a working server, route print gives me the following (MACs redacted) =========================================================================== Interface List 11...XX XX XX XX XX XX ......Intel(R) PRO/1000 MT Network Connection 13...XX XX XX XX XX XX00 0c 29 85 b2 98 ......Intel(R) PRO/1000 MT Network Connection #2 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 14...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 15...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.28.58.1 172.28.58.11 266 10.18.1.22 255.255.255.255 172.28.58.1 172.28.58.11 11 10.32.0.0 255.255.0.0 172.28.56.1 172.28.56.201 11 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 172.28.34.0 255.255.255.0 172.28.56.1 172.28.56.201 11 172.28.42.0 255.255.255.0 172.28.56.1 172.28.56.201 11 172.28.56.0 255.255.255.0 On-link 172.28.56.201 266 172.28.56.0 255.255.255.0 172.28.56.1 172.28.56.201 11 172.28.56.201 255.255.255.255 On-link 172.28.56.201 266 172.28.56.255 255.255.255.255 On-link 172.28.56.201 266 172.28.58.0 255.255.255.224 On-link 172.28.58.11 266 172.28.58.0 255.255.255.224 172.28.58.1 172.28.58.11 11 172.28.58.1 255.255.255.255 172.28.58.1 172.28.58.11 11 172.28.58.11 255.255.255.255 On-link 172.28.58.11 266 172.28.58.31 255.255.255.255 On-link 172.28.58.11 266 172.28.60.0 255.255.255.0 172.28.56.1 172.28.56.201 11 172.28.63.0 255.255.255.0 172.28.56.1 172.28.56.201 11 192.168.0.0 255.255.0.0 172.28.56.1 172.28.56.201 11 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 172.28.56.201 266 224.0.0.0 240.0.0.0 On-link 172.28.58.11 266 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 172.28.56.201 266 255.255.255.255 255.255.255.255 On-link 172.28.58.11 266 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 172.28.56.0 255.255.255.0 172.28.56.1 1 172.28.63.0 255.255.255.0 172.28.56.1 1 192.168.0.0 255.255.0.0 172.28.56.1 1 172.28.60.0 255.255.255.0 172.28.56.1 1 10.32.0.0 255.255.0.0 172.28.56.1 1 172.28.34.0 255.255.255.0 172.28.56.1 1 172.28.42.0 255.255.255.0 172.28.56.1 1 0.0.0.0 0.0.0.0 172.28.58.1 Default =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 1 306 ff00::/8 On-link =========================================================================== Persistent Routes: None On one of the non-working server, route print gives me the following (MACs redacted) 
=========================================================================== Interface List 11...XX XX XX XX XX XX ......Intel(R) PRO/1000 MT Network Connection 13...XX XX XX XX XX XX ......Intel(R) PRO/1000 MT Network Connection #2 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 14...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 16...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.28.58.1 172.28.58.21 266 10.32.0.0 255.255.0.0 172.28.56.1 172.28.56.211 11 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 172.28.34.0 255.255.255.0 172.28.56.1 172.28.56.211 11 172.28.42.0 255.255.255.0 172.28.56.1 172.28.56.211 11 172.28.56.0 255.255.255.0 172.28.56.1 172.28.56.211 11 172.28.56.211 255.255.255.255 On-link 172.28.56.211 266 172.28.58.0 255.255.255.0 172.28.58.1 172.28.58.21 11 172.28.58.0 255.255.255.224 On-link 172.28.58.21 266 172.28.58.21 255.255.255.255 On-link 172.28.58.21 266 172.28.58.31 255.255.255.255 On-link 172.28.58.21 266 172.28.60.0 255.255.255.0 172.28.56.1 172.28.56.211 11 172.28.63.0 255.255.255.0 172.28.56.1 172.28.56.211 11 192.168.0.0 255.255.0.0 172.28.56.1 172.28.56.211 11 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 172.28.56.211 266 224.0.0.0 240.0.0.0 On-link 172.28.58.21 266 255.255.255.255 255.255.255.255 On-link 127.0.0.1 06 255.255.255.255 255.255.255.255 On-link 172.28.56.211 266 255.255.255.255 255.255.255.255 On-link 172.28.58.21 266 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 172.28.56.0 255.255.255.0 172.28.56.1 1 172.28.60.0 255.255.255.0 172.28.56.1 1 172.28.63.0 255.255.255.0 172.28.56.1 1 172.28.34.0 255.255.255.0 172.28.56.1 1 172.28.42.0 255.255.255.0 172.28.56.1 1 192.168.0.0 255.255.0.0 172.28.56.1 1 10.32.0.0 255.255.0.0 172.28.56.1 1 0.0.0.0 0.0.0.0 172.28.58.1 Default 172.28.58.0 255.255.255.0 172.28.58.1 1 =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 1 306 ff00::/8 On-link =========================================================================== Persistent Routes: None I am at a complete loss why the non-working servers have no On-link route for 172.28.56.0. Does anyone have any suggestions on what I should be looking at to figure this out? Also, I do have "physical" access to the console if needed through vSphere Client.
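    A diagnostic sketch rather than a fix: compared with the working server, the non-working one is missing the On-link 172.28.56.0/24 route that normally comes from the adapter's own IP address and mask, and it carries an extra persistent 172.28.58.0/24 route via 172.28.58.1. Comparing the admin NIC's address and mask against a working server, and clearing the stray persistent route, would be a reasonable first step (commands below assume an elevated prompt):

        rem compare the admin adapter's address and mask with a working server
        netsh interface ipv4 show addresses
        route print -4
        rem the extra persistent production-network /24 is one visible difference
        route delete 172.28.58.0 mask 255.255.255.0 172.28.58.1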

    Read the article

  • Open source CMS for a university department

    - by Greg Kuperberg
    I realize that this type of question gets asked over and over again. Nonetheless, I want to ask a more specific version. I'm in a university math department. Long ago our sysadmins (or just one at the time) switched to a web content management system. At the time, Zope looked like an informed choice. We have used Zope for years, but at least in my opinion, it has always been a controversial decision. At the time I didn't understand why it was so important to have a web CMS. Now I see that it certainly is important, but I don't know that it should be Zope. The good (even necessary) features of Zope for us are: It's free and Linux-based. It is a true CMS and not something else (e.g. wiki or blog) It lets you write HTML and scripts. What I really don't like about Zope is that the outcome of using it is all-or-nothing in a lot of ways. At least in convenient use, it ends up dividing the enterprise into superusers who can do everything, and lusers who can't do anything (except write their own home pages in plain HTML). It has a huge user manual, which end users won't have time to read. Somehow with the access permissions, the simple thing to do is to let a few admins access all of the source and data and that's it. Since this is a math department, the user base varies from real novices to people who understand computers reasonably well. But as it stands, any change that involves Zope has to go through the sysadmins. When the sysadmins are in a hurry, sometimes they will also just add plain HTML pages to the web site instead of using the Zope framework. It doesn't help matters that Zope is fairly disk-intensive and fairly hype-intensive. Not to dwell on Zope too much, but I am wondering what is the right web CMS for a mixed user base of terminal novices, quick studies, and experienced users. Some users might want intermediate permissions, e.g. read permission but not write permission, or permission to change some subset of the pages or see some subset of the database tables. Also it should be Linux-based and open source and a little bit scalable, and of course widely used and well-supported is a good idea. I might guess that the answer is Drupal just because that was the general answer before, but I don't know if it is the right type of CMS for this purpose. (But note that Python is a relatively popular language in a math department, among other reasons because Sage is based on Python.) I can see that I didn't completely define the question and that people are guessing what type of site it is. It is the UC Davis Math Department. The main structure of the site is not suitable for a wiki and it is also not the same thing as a course environment like Moodle. Rather, the site is mostly structured as a generic medium-small enterprise. Some components of the site could be a wiki, Moodle, LaTeX plugin, Request Tracker, etc. However, the main issue is not these components. The main issue is that it would be better to decentralize management of the site. Right now, everything that is in the Zope CMS has to go through the sysadmins. Every other user in the department either has to put in a request to them, or write their own web pages with no help from Zope. There are two main reasons for this: (1) Other people in the department don't have time to read the Zope manual. (2) It's a hassle to set up intermediate permissions in Zope. However, there are other people in the department who know how to write computer programs and use markup languages. 
I wouldn't want a solution that assumes that users either can't be trusted with much more than drag-and-drop, or that they are IT professionals who sleep with documentation manuals. I'm wondering if Plone/Zope still has this quality, since certainly Zope by itself does. But I also wonder sometimes if common-sense flexibility is unfashionable these days, and that things in general have to be either mindlessly easy or incredibly powerful.

    Read the article

  • File Access problems with SLES 10 SP2 OES2 SP1

    - by Blackhawk131
    We have identified a couple of repeatable, demonstrable scenarios with unexplained rejected folder access on our servers for Mac users. Hopefully, this can be presented to Novell for a solution. What we did to demonstrate scenario 1; 1. setup a PC and Mac side-by-side 2. login to our server and open up to a central location on both Mac and PC 3. on the PC in that central location create a folder 4. on the Mac in that central location drag the created folder to the Mac desktop, this should work fine, no problem 5. on the PC rename that folder 6. on the Mac drag a file to that renamed folder, this should error with the following message; a. You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items? b. Select skip, response is the filename is copied to the location with zero or small byte size. Try opening it and you get file is corrupted error message. What we did to demonstrate scenario 2; 1. setup a PC and Mac side-by-side 2. login to our server and open up to a central location on both Mac and PC 3. on the PC in that central location create a folder then create a subfolder 4. copy some content into the subfolder 5. on the Mac in that central location drag the created top level folder to the Mac desktop, this should work fine, no problem 6. on the PC rename that subfolder 7. on the Mac drag that top level folder to the Mac desktop, this should error on the Mac with the following; a. The operation cannot be completed because you do not have sufficient privileges for b. The operation cannot be completed because you do not have sufficient privileges for 8. on the Mac, if you open that subfolder you can see the file copied in step 4 above but, you can not open that file, you get the following message if you try; a. There was an error opening this document. You do not have permission to open this file. 9. on the PC drag some content into the top level folder 10. on the Mac you can open that file directly from the server or copy it locally, no problem, however-the subfolder is still corrupted or locked, whichever 11. on the PC rename the top level folder 12. on the Mac that same file just opened in step 10 above is now not accessible, get the following message; a. The document could not be opened. I have observed some variances in the above. For instance, a change on the PC side may take a moment before you can observer or act on the Mac side - kind of like the server is slow to respond. Also, the error message may vary. However, the key is once a folder, or subfolder, gets renamed by a PC, Mac problems commence. The solution is to create a new folder from a PC and copy the contents of the corrupted folder to the new folder and not rename the folder name. This has to be done on a PC because the corrupted folder is not accessible by a Mac user. Another problem that dovetails with the above is that we know certain characters are not allowed for PC folder or filenames. If a Mac user creates a folder with a slash in the file name, from the PC the user does not see that slash in the name. As soon as the PC user copies a file to that folder, the Mac user is locked from that folder. Will get the following error message; - Sorry, the operation could not be completed because an unexpected error occurred. - (Error code - 50) In addition to the above mentioned character issue with folders, the problem is more evil with filenames. 
If, for example, you create a file with a slash in the filename on a Mac and copy it to the server you will get the following error message; - You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items? Select either Stop or Skip buttons. It does not matter which button is selected. The file name gets copied to the destination location at a reduced size. Depending on the file type, the icon associated with the file may or may not be present. Furthermore, if you open that file on the server you will get the following message; - Couldnt open the file. It may be corrupt or a file format that doesnt recognize. From the users perspective, if they are not observant of the icon or file size, they may disregard the error message and think their file has copied as intended. Only later do they discover the file is corrupt if they open that file. I want to make a note on this problem. It is the PC causing the issue. You can change folder and file names all day on a MAC and you don't have a problem as long as a character is not the issue. Once you change the file name or folder name from a PC the entire folder structure from that level down is corrupted. But it has to be resolved from a PC by creating a new folder and copying the contents to the new folder like stated above. Is something not configured correctly? SUSE Linux Enterprise Server 10 (x86_64) VERSION = 10 PATCHLEVEL = 2 LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64" Novell Open Enterprise Server 2.0.1 (x86_64) VERSION = 2.0.1 PATCHLEVEL = 1 BUILD Note: We use Novell clients on all windows systems to connect to the servers for file access and network storage. We use AFP to allow OSx systems to connect to servers.

    Read the article

  • Setup routing and iptables for new VPN connection to redirect **only** ports 80 and 443

    - by Steve
    I have a new VPN connection (using openvpn) to allow me to route around some ISP restrictions. Whilst it is working fine, it is taking all the traffic over the vpn. This is causing me issues for downloading (my internet connection is a lot faster than the vpn allows), and for remote access. I run an ssh server, and have a daemon running that allows me to schdule downloads via my phone. I have my existing ethernet connection on eth0, and the new VPN connection on tun0. I believe I need to setup the default route to use my existing eth0 connection on the 192.168.0.0/24 network, and set the default gateway to 192.168.0.1 (my knowledge is shaky as I haven't done this for a number of years). If that is correct, then I'm not exactly sure how to do it!. My current routing table is: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface MSS Window irtt 0.0.0.0 10.51.0.169 0.0.0.0 UG 0 0 0 tun0 0 0 0 10.51.0.1 10.51.0.169 255.255.255.255 UGH 0 0 0 tun0 0 0 0 10.51.0.169 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 0 0 0 85.25.147.49 192.168.0.1 255.255.255.255 UGH 0 0 0 eth0 0 0 0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 0 0 0 192.168.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 0 0 0 After fixing the routing, I believe I need to use iptables to configure prerouting or masquerading to force everything for destination port 80 or 443 over tun0. Again, I'm not exactly sure how to do this! Everything I've found on the internet is trying to do something far more complicated, and trying to sort the wood from the trees is proving difficult. Any help would be much appreciated. UPDATE So far, from the various sources, I've cobbled together the following: #!/bin/sh DEV1=eth0 IP1=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 192.` GW1=192.168.0.1 TABLE1=internet TABLE2=vpn DEV2=tun0 IP2=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.` GW2=`route -n | grep 'UG[ \t]' | awk '{print $2}'` ip route flush table $TABLE1 ip route flush table $TABLE2 ip route show table main | grep -Ev ^default | while read ROUTE ; do ip route add table $TABLE1 $ROUTE ip route add table $TABLE2 $ROUTE done ip route add table $TABLE1 $GW1 dev $DEV1 src $IP1 ip route add table $TABLE2 $GW2 dev $DEV2 src $IP2 ip route add table $TABLE1 default via $GW1 ip route add table $TABLE2 default via $GW2 echo "1" > /proc/sys/net/ipv4/ip_forward echo "1" > /proc/sys/net/ipv4/ip_dynaddr ip rule add from $IP1 lookup $TABLE1 ip rule add from $IP2 lookup $TABLE2 ip rule add fwmark 1 lookup $TABLE1 ip rule add fwmark 2 lookup $TABLE2 iptables -t nat -A POSTROUTING -o $DEV1 -j SNAT --to-source $IP1 iptables -t nat -A POSTROUTING -o $DEV2 -j SNAT --to-source $IP2 iptables -t nat -A PREROUTING -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark iptables -t nat -A PREROUTING -i $DEV1 -m state --state NEW -j CONNMARK --set-mark 1 iptables -t nat -A PREROUTING -i $DEV2 -m state --state NEW -j CONNMARK --set-mark 2 iptables -t nat -A PREROUTING -m connmark --mark 1 -j MARK --set-mark 1 iptables -t nat -A PREROUTING -m connmark --mark 2 -j MARK --set-mark 2 iptables -t nat -A PREROUTING -m state --state NEW -m connmark ! --mark 0 -j CONNMARK --save-mark iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 80 -j CONNMARK --set-mark 2 iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 443 -j CONNMARK --set-mark 2 route del default route add default gw 192.168.0.1 eth0 Now this seems to be working. 
Except it isn't! Connections to the blocked websites are going through, connections not on ports 80 and 443 are using the non-VPN connection. However port 80 and 443 connections that aren't to the blocked websites are using the non-VPN connection too! As the general goal has been reached, I'm relatively happy, but it would be nice to know why it isn't working exactly right. Any ideas? For reference, I now have 3 routing tables, main, internet, and vpn. The listing of them is as follows... Main: default via 192.168.0.1 dev eth0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1 Internet: default via 192.168.0.1 dev eth0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1 192.168.0.1 dev eth0 scope link src 192.168.0.73 VPN: default via 10.38.0.205 dev tun0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
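    One hedged explanation for the remaining symptom: the MARK rules in the script run only in PREROUTING chains, which never see traffic generated on the box itself, so locally originated connections keep using the main table regardless of port. Marking port 80/443 traffic in the mangle OUTPUT chain is the usual way to steer locally generated traffic; the "fwmark 2 lookup vpn" rule this relies on already exists in the script above.

        # mark locally originated web traffic so the existing "fwmark 2 lookup vpn" rule applies
        iptables -t mangle -A OUTPUT -p tcp --dport 80 -j MARK --set-mark 2
        iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 2
        ip route flush cache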

    Read the article

  • Can't ping other machines at Linux VPN PPTP server's local lan from outside

    - by Marco Sanchez
    Before anything else, hello guys, this is the first time I ask for something here so I hope someone can give me a hand, please look at the following network diagram: --------------------------------------------------------------- VPN Server Webserver (SuSE SLES11) | | | ------- VPN LAN -------- | Router with Unique IP (With Port Forwarding rules set and VPN through enabled) | PPTP connection over Internet | Workstation (PC or Laptop with Windows) --------------------------------------------------------------- So the idea is for the workstation to connect to the PPTP Server and then be able to access a Web Application on the Webserver, right now I have the PPTP server configured and the VPN works, I can connect to the SLES11 server with no problems from the workstation and I can ping it and everything works fine but if I try to ping the Webserver from the workstation, I can't reach it, I'm making a mistake somewhere but I don't see where, please note that I'm not a network expert and thus I'd greatly appreciate some specific guidance. Here is some info related to the IPs --------------------------------------------------------------- *** SLES11 VPN Server has 2 Network cards: -- eth0 (Internal Network) IP: 192.168.210.5 MASK: 255.55.255.0 -- eth1 (External Network) IP: 192.168.1.105 MASK: 255.55.255.0 *** Webserver has 1 network card -- eth0 (Internal Network) IP: 192.168.210.221 MASK: 255.55.255.0 *** Workstation -- IP info once connection has been established to the VPN PPP adapter Test VPN Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Test VPN Connection Physical Address. . . . . . . . . : DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 192.168.210.110(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.255 Default Gateway . . . . . . . . . : 0.0.0.0 DNS Servers . . . . . . . . . . . : 189.209.208.181 (Defined as part of the PPTP Server options config script) 189.209.127.244 Primary WINS Server . . . . . . . : 192.168.210.220 (Defined as part of the PPTP Server options config script) NetBIOS over Tcpip. . . . . . . . : Enabled --------------------------------------------------------------- I also defined the following within IP tables: ------------------------------------------------------------- iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE iptables -A INPUT -i eth0 -p tcp --dport 1723 -j ACCEPT iptables -A INPUT -i eth0 -p gre -j ACCEPT ------------------------------------------------------------- If you need any piece of information from the PPTP server scripts please let me know, the thing is that I can actually connect to the VPN server and access its services and everything but after that I can't reach any other computer on that LAN. Any help would be greatly appreciated and thanks in advance
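    A couple of hedged things to check, assuming the PPTP daemon hands each client a ppp interface: packets from 192.168.210.110 have to be forwarded between ppp+ and eth0, and the FORWARD chain (or SuSEfirewall2 on SLES) can silently drop them even though the POSTROUTING masquerade rule is present.

        # enable forwarding at runtime (add net.ipv4.ip_forward = 1 to /etc/sysctl.conf to persist)
        sysctl -w net.ipv4.ip_forward=1
        # let VPN clients reach the internal LAN and allow the replies back
        iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT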

    Read the article

  • FFmpeg creates empty (black) frames

    - by resamsel
    I have a set of images from a timelapse shot (172 JPG files) that I want to convert into a movie. I tried several parameters with FFmpeg, but all I get is a video with black frames (though it has the expected length). ffmpeg -f image2 -vcodec mjpeg -y -i img_%03d.jpg timelapse2.mpg The command above creates this video: http://sdm-net.org/data/timelapse2.mpg What I'm expecting is something like this (created with Time Lapse Assembler.app): https://vimeo.com/39038362 - This is my fallback option, but I'd really like to create timelapse movies from a script. I'm on OSX Lion (10.7.3) with FFmpeg version (0.10) installed via Homebrew. I also tried to find a proper version of mencoder for OSX, but this doesn't seem to be an easy task. Also, ImageMagick's convert doesn't seem to work nicely, it creates really bad output and it seems there's not much I can do about it... Edit: With libx264 and an mp4 container: ffmpeg -f image2 -y -i img_%03d.jpg -vcodec libx264 timelapse4.mp4 Output: ffmpeg version 0.10 Copyright (c) 2000-2012 the FFmpeg developers built on Mar 26 2012 13:47:02 with clang 3.0 (tags/Apple/clang-211.12) configuration: --prefix=/usr/local/Cellar/ffmpeg/0.10 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --disable-ffplay libavutil 51. 34.101 / 51. 34.101 libavcodec 53. 60.100 / 53. 60.100 libavformat 53. 31.100 / 53. 31.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 60.100 / 2. 60.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52. 0.100 Input #0, image2, from 'img_%03d.jpg': Duration: 00:00:06.88, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [buffer @ 0x7f8ec9415f20] w:3888 h:2592 pixfmt:yuvj420p tb:1/1000000 sar:72/72 sws_param: [libx264 @ 0x7f8ec981d800] using SAR=1/1 [libx264 @ 0x7f8ec981d800] frame MB size (243x162) > level limit (36864) [libx264 @ 0x7f8ec981d800] MB rate (984150) > level limit (983040) [libx264 @ 0x7f8ec981d800] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 AVX [libx264 @ 0x7f8ec981d800] profile High, level 5.1 [libx264 @ 0x7f8ec981d800] 264 - core 120 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'timelapse4.mp4': Metadata: encoder : Lavf53.31.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], q=-1--1, 25 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (mjpeg -> libx264) Press [q] to stop, [?] 
for help frame= 172 fps= 18 q=-1.0 Lsize= 259kB time=00:00:06.80 bitrate= 312.3kbits/s video:256kB audio:0kB global headers:0kB muxing overhead 1.089647% [libx264 @ 0x7f8ec981d800] frame I:1 Avg QP: 9.60 size:212820 [libx264 @ 0x7f8ec981d800] frame P:43 Avg QP:30.50 size: 291 [libx264 @ 0x7f8ec981d800] frame B:128 Avg QP:31.00 size: 285 [libx264 @ 0x7f8ec981d800] consecutive B-frames: 0.6% 0.0% 1.7% 97.7% [libx264 @ 0x7f8ec981d800] mb I I16..4: 22.5% 77.2% 0.3% [libx264 @ 0x7f8ec981d800] mb P I16..4: 0.0% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:100.0% [libx264 @ 0x7f8ec981d800] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.0% 0.0% 0.0% direct: 0.0% skip:100.0% L0: 1.2% L1:98.8% BI: 0.0% [libx264 @ 0x7f8ec981d800] 8x8 transform intra:77.2% inter:100.0% [libx264 @ 0x7f8ec981d800] coded y,uvDC,uvAC intra: 41.2% 23.4% 0.6% inter: 0.0% 0.0% 0.0% [libx264 @ 0x7f8ec981d800] i16 v,h,dc,p: 40% 25% 35% 1% [libx264 @ 0x7f8ec981d800] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 36% 32% 30% 1% 0% 0% 0% 0% 0% [libx264 @ 0x7f8ec981d800] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 51% 40% 6% 1% 1% 0% 1% 0% 1% [libx264 @ 0x7f8ec981d800] i8c dc,h,v,p: 60% 21% 19% 0% [libx264 @ 0x7f8ec981d800] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x7f8ec981d800] ref P L0: 92.3% 0.0% 0.0% 7.7% [libx264 @ 0x7f8ec981d800] ref B L0: 50.0% 0.0% 50.0% [libx264 @ 0x7f8ec981d800] ref B L1: 99.4% 0.6% [libx264 @ 0x7f8ec981d800] kb/s:304.49 Output timelapse4.mp4 (beacause of spam protection I can only post two links with my reputation): http sdm-net.org/data/timelapse4.mp4
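    For what it is worth, the log above shows the 3888x2592 frames exceeding the encoder's limits ("frame MB size ... > level limit", "MB rate ... > level limit"), which can yield unplayable or apparently black output. A hedged variant worth trying scales the frames down and forces the input frame rate; the 24 fps and 1920-pixel width are arbitrary choices:

        ffmpeg -f image2 -r 24 -i img_%03d.jpg -vf scale=1920:-1 -vcodec libx264 -pix_fmt yuv420p timelapse.mp4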

    Read the article

  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up but I cannot get dhcp to assign the container an IP address. If I assign a static address the container can ping the host IP but not outside the host IP. The host is CentOS 6.5 and the guest is Ubuntu 14.04LTS. I used the template downloaded by lxc-create -t download -n cn-01 command. If I am trying to get an IP address on the same subnet as the host I don't believe I should need the IP tables rule for masquerading but I added it anyways. Same with IP forwarding. I compiled LXC by hand from the following source https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz Host Operating System Version #> cat /etc/redhat-release CentOS release 6.5 (Final) #> uname -a Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Container Config #> cat /usr/local/var/lib/lxc/cn-01/config # Template used to create this container: /usr/local/share/lxc/templates/lxc-download # Parameters passed to the template: # For additional config options, please look at lxc.container.conf(5) # Distribution configuration lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf lxc.arch = x86_64 # Container specific configuration lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs lxc.utsname = cn-01 # Network configuration lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 LXC default.confu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:f #> cat /usr/local/etc/lxc/default.conf lxc.network.type = veth lxc.network.link = br0 lxc.network.flags = up #> lxc-checkconfig Kernel configuration not found at /proc/config.gz; searching... Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64 --- Namespaces --- Namespaces: enabled Utsname namespace: enabled Ipc namespace: enabled Pid namespace: enabled User namespace: enabled Network namespace: enabled Multiple /dev/pts instances: enabled --- Control groups --- Cgroup: enabled Cgroup namespace: enabled Cgroup device: enabled Cgroup sched: enabled Cgroup cpu account: enabled Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments enabled Cgroup cpuset: enabled --- Misc --- Veth pair device: enabled Macvlan: enabled Vlan: enabled File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected Note : Before booting a new kernel, you can check its configuration usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig Network Config (HOST) #> cat /etc/sysconfig/network-scripts/ifcfg-br0 DEVICE=br0 TYPE=Bridge BOOTPROTO=dhcp ONBOOT=yes #> cat /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 ONBOOT=yes TYPE=Ethernet IPV6INIT=no USERCTL=no BRIDGE=br0 #> cat /etc/networks default 0.0.0.0 loopback 127.0.0.0 link-local 169.254.0.0 #> ip a s 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff 4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 
00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet 10.60.70.121/24 brd 10.60.70.255 scope global br0 inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff inet6 fe80::fca1:69ff:feaf:5017/64 scope link valid_lft forever preferred_lft forever #> brctl show bridge name bridge id STP enabled interfaces br0 8000.000c291230f2 no eth0 vethT6BGL2 pan0 8000.000000000000 no #> cat /proc/sys/net/ipv4/ip_forward 1 # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014 *nat :PREROUTING ACCEPT [34:6287] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A POSTROUTING -o eth0 -j MASQUERADE COMMIT # Completed on Fri Jul 11 15:11:36 2014 Network Config (Container) #> cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp #> ip a s 11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff inet6 fe80::69:fbff:fe42:eed7/64 scope link valid_lft forever preferred_lft forever 13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever
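    Two hedged checks, since the bridge and the nat rule look plausible: first, confirm whether DHCP requests actually leave br0 and whether replies come back; second, the 00:0c:29 MAC suggests a VMware guest, and virtualized hosts are known to drop DHCP replies carrying offloaded/invalid UDP checksums, which the CHECKSUM mangle target works around if it is available on this kernel.

        # on the CentOS host, while running "dhclient eth0" inside the container
        tcpdump -ni br0 port 67 or port 68
        # work around bad UDP checksums on DHCP replies (requires the xt_CHECKSUM target)
        iptables -t mangle -A POSTROUTING -p udp --dport 68 -j CHECKSUM --checksum-fill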

    Read the article

  • JqGrid addJSONData + ASP.NET 2.0 WS

    - by MilosC
    Dear community ! I am a bit lost. I' ve tried to implement a solution based on JqGrid and tried to use function as datatype. I've setted all by the book i guess, i get WS invoked and get JASON back, I got succes on clientside in ajaf call and i "bind" jqGrid using addJSONData but grid remains empty. I do not have any glue now... other "local" samples on same pages works without a problem (jsonstring ...) My WS method looks like : [WebMethod] [ScriptMethod(ResponseFormat = ResponseFormat.Json)] public string GetGridData() { // Load a list InitSessionVariables(); SA.DB.DenarnaEnota.DenarnaEnotaDB db = new SAOP.SA.DB.DenarnaEnota.DenarnaEnotaDB(); DataSet ds = db.GetLookupForDenarnaEnota(SAOP.FW.DB.RecordStatus.All); // Turn into HTML friendly format GetGridData summaryList = new GetGridData(); summaryList.page = "1"; summaryList.total = "10"; summaryList.records = "160"; int i = 0; foreach (DataRow dr in ds.Tables[0].Rows) { GridRows row = new GridRows(); row.id = dr["DenarnaEnotaID"].ToString(); row.cell = "[" + "\"" + dr["DenarnaEnotaID"].ToString() + "\"" + "," + "\"" + dr["Kratica"].ToString() + "\"" + "," + "\"" + dr["Naziv"].ToString() + "\"" + "," + "\"" + dr["Sifra"].ToString() + "\"" + "]"; summaryList.rows.Add(row); } return JsonConvert.SerializeObject(summaryList); } my ASCX code is this: jQuery(document).ready(function(){ jQuery("#list").jqGrid({ datatype : function (postdata) { jQuery.ajax({ url:'../../AjaxWS/TemeljnicaEdit.asmx/GetGridData', data:'{}', dataType:'json', type: 'POST', contentType: "application/json; charset=utf-8", complete: function(jsondata,stat){ if(stat=="success") { var clearJson = jsondata.responseText; var thegrid = jQuery("#list")[0]; var myjsongrid = eval('('+clearJson+')'); alfs thegrid.addJSONData(myjsongrid.replace(/\\/g,'')); } } } ); }, colNames:['DenarnaEnotaID','Kratica', 'Sifra', 'Naziv'], colModel:[ {name:'DenarnaEnotaID',index:'DenarnaEnotaID', width:100}, {name:'Kratica',index:'Kratica', width:100}, {name:'Sifra',index:'Sifra', width:100}, {name:'Naziv',index:'Naziv', width:100}], rowNum:15, rowList:[15,30,100], pager: jQuery('#pager'), sortname: 'id', // loadtext:"Nalagam zapise...", // viewrecords: true, sortorder: "desc", // caption:"Vrstice", // width:"800", imgpath: "../Scripts/JGrid/themes/basic/images"}); }); from WS i GET JSON like this: {”page”:”1?,”total”:”10?,”records”:”160?,”rows”:[{"id":"18","cell":"["18","BAM","Konvertibilna marka","977"]“},{”id”:”19?,”cell”:”["19","RSD","Srbski dinar","941"]“},{”id”:”20?,”cell”:”["20","AFN","Afgani","971"]“},{”id”:”21?,”cell”:”["21","ALL","Lek","008"]“},{”id”:”22?,”cell”:”["22","DZD","Alžirski dinar","012"]“},{”id”:”23?,”cell”:”["23","AOA","Kvanza","973"]“},{”id”:”24?,”cell”:”["24","XCD","Vzhodnokaribski dolar","951"]“},{”id”:”25?,”cell”:” ……………… ["13","PLN","Poljski zlot","985"]“},{”id”:”14?,”cell”:”["14","SEK","Švedska krona","752"]“},{”id”:”15?,”cell”:”["15","SKK","Slovaška krona","703"]“},{”id”:”16?,”cell”:”["16","USD","Ameriški dolar","840"]“},{”id”:”17?,”cell”:”["17","XXX","Nobena valuta","000"]“},{”id”:”1?,”cell”:”["1","SIT","Slovenski tolar","705"]“}]} i have registered this js : clientSideScripts.RegisterClientScriptFile("prototype.js", CommonFunctions.FixupUrlWithoutSessionID("~/WebUI/Scripts/prototype-1.6.0.2.js")); clientSideScripts.RegisterClientScriptFile("jquery.js", CommonFunctions.FixupUrlWithoutSessionID("~/WebUI/Scripts/JGrid/jquery.js")); clientSideScripts.RegisterClientScriptFile("jquery.jqGrid.js", 
CommonFunctions.FixupUrlWithoutSessionID("~/WebUI/Scripts/JGrid/jquery.jqGrid.js")); clientSideScripts.RegisterClientScriptFile("jqModal.js", CommonFunctions.FixupUrlWithoutSessionID("~/WebUI/Scripts/JGrid/js/jqModal.js")); clientSideScripts.RegisterClientScriptFile("jqDnR.js", CommonFunctions.FixupUrlWithoutSessionID("~/WebUI/Scripts/JGrid/js/jqDnR.js")); Basically, I think it must be something stupid... but I can't figure it out. Help wanted.

    Read the article

  • Entity Framework and multi-tenancy database design

    - by Junto
    I am looking at multi-tenancy database schema design for an SaaS concept. It will be ASP.NET MVC - EF, but that isn't so important. Below you can see an example database schema (the Tenant being the Company). The CompanyId is replicated throughout the schema and the primary key has been placed on both the natural key, plus the tenant Id. Plugging this schema into the Entity Framework gives the following errors when I add the tables into the Entity Model file (Model1.edmx): The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Product' uses the set of foreign keys '{ProductId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The question is in two parts: Is my database design incorrect? Should I refrain from these compound primary keys? I'm questioning my sanity regarding the fundamental schema design (frazzled brain syndrome). Please feel free to suggest the 'idealized' schema. Alternatively, if the database design is correct, then is EF unable to match the keys because it perceives these foreign keys as a potential mis-configured 1:1 relationships (incorrectly)? In which case, is this an EF bug and how can I work around it?

    Read the article

  • Small performance test on a web service

    - by vtortola
    Hi, I'm trying to develop a small application that test how many requests per second can my service support but I think I'm doing something wrong. The service is in an early development stage, but I'd like to have this test handy in order to check in time to time I'm not doing something that decrease the performance. The problem is that I cannot get the web server or the database server go to the 100% of CPU. I'm using three different computers, in one is the web server (WinSrv Standard 2008 x64 IIS7), in other the database (Win 2K - SQL Server 2005) and the last is my computer (Win7 x64 ultimate), where I'll run the test. The computers are connected through a 100 ethernet switch. The request POST is 9 bytes and the response will be 842 bytes. The test launches several threads, and each thread has a "while" loop, in each loop it creates a WebRequest object, performs a call, increment a common counter and waits between 1 and 5 millisencods, then it do it again: static Int32 counter = 0; static void Main(string[] args) { ServicePointManager.DefaultConnectionLimit = 250; Console.WriteLine("Ready. Press any key..."); Console.ReadKey(); Console.WriteLine("Running..."); String localhost = "localhost"; String linuxmono = "192.168.1.74"; String server= "192.168.1.5:8080"; DateTime start = DateTime.Now; Random r = new Random(DateTime.Now.Millisecond); for (int i = 0; i < 50; i++) { new Thread(new ParameterizedThreadStart(Test)).Start(server); Thread.Sleep(r.Next(1, 3)); } Thread.Sleep(2000); while (true) { Console.WriteLine("Request per second :" + counter/DateTime.Now.Subtract(start).TotalSeconds ); Thread.Sleep(3000); } } public static void Test(Object ip) { Guid guid = Guid.NewGuid(); Random r = new Random(DateTime.Now.Millisecond); while (true) { String test = "<lalala/>"; WebRequest req = WebRequest.Create("http://" + (String)ip + "/WebApp/"+guid.ToString()+"/Data/Tables=whatever"); req.Method = "POST"; req.ContentType = "application/xml"; req.Credentials = new NetworkCredential("aaa", "aaa","domain"); Byte[] array = Encoding.UTF8.GetBytes(test); req.ContentLength = array.Length; using (Stream reqStream = req.GetRequestStream()) { reqStream.Write(array, 0, array.Length); reqStream.Close(); } using (Stream responseStream = req.GetResponse().GetResponseStream()) { String response = new StreamReader(responseStream).ReadToEnd(); if (response.Length != 842) Console.Write(" EEEE "); } Interlocked.Increment(ref counter); Thread.Sleep(r.Next(1,5)); } } If I run the test neither of the computers do an excesive CPU usage. Let's say I get a X requests per second, if I run the console application two times at the same moment, I get X/2 request per second in each one... but still the web server is on 30% of CPU, the database server on 25%... I've tried to remove the thread.sleep in the loop, but it doesn't make a big difference. I'd like to put the machines to the maximun, to check how may requests per second they can provide. I guessed that I could do it in this way... but apparently I'm missing something here... What is the problem? Kind regards.
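    As a cross-check that takes the custom client's threading out of the equation, a command-line load generator such as Apache Bench can drive the same endpoint; a hedged sketch, where the URL segment, the payload file and the credentials are placeholders, and Basic authentication is assumed to be acceptable to the service:

        # post.xml holds the 9-byte request body
        ab -n 10000 -c 50 -k -p post.xml -T "application/xml" -A aaa:aaa "http://192.168.1.5:8080/WebApp/test/Data/Tables=whatever"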

    Read the article

  • Complex SQL query help on aggregating values for nested subquery

    - by François Beausoleil
    Hi! I have people, companies, employees, events and event kinds. I'm making a report/followup sheet where people, companies and employees are the rows, and the columns are event kinds. Event kinds are simple values describing: "Promised Donation", "Received Donation", "Phoned", "Followed up" and such. Event kinds are ordered: CREATE TABLE event_kinds ( id, name, position); Events hold the actual reference to the event: CREATE TABLE events ( id, person_id, company_id, referrer_id, event_kind_id, created_at); referrer_id is another reference to people. It is the person which sent the information/tip along, and is an optional field, although I sometimes want to filter on an event_kind that has a specific referrer, while I don't for other event kinds. Notice I don't have an employee ID reference. The reference exists, but is implied. I have application code to validate that person_id and company_id really reference an employee record. The other tables are pretty basic: CREATE TABLE people ( id, name); CREATE TABLE companies ( id, name); CREATE TABLE employees ( id, person_id, company_id); I'm trying to achieve the following report: Referrer Phoned Promised Donated Francois Feb 16th Feb 20th Mar 1st Apple (Steve Jobs) Steve Ballmer Mar 3rd IBM Bill Gates Mar 7th The first row is a people record, the 2nd is an employee, and the 3rd is a company. If I asked for referrer Bill Gates for Phoned event kinds, I'd only see the 3rd row, while asking for Steve and Phoned would return no rows. Right now, I do 3 queries, one for companies, one for people and a last one for employees. I want the event kind columns to be ordered, but I do that in application code and show it properly there. Here's where I'm at so far: SELECT companies.id, companies.name, (SELECT events.id FROM events WHERE events.referrer_id = 1470 AND events.company_id = companies.id AND events.person_id IS NULL AND events.event_kind_id = 9 ORDER BY created_at DESC LIMIT 1) event_kind_9, (SELECT events.id FROM events WHERE events.company_id = companies.id AND events.person_id IS NULL AND events.event_kind_id = 10 ORDER BY created_at DESC LIMIT 1) event_kind_10, (SELECT events.created_at FROM events WHERE events.referrer_id = 1470 AND events.company_id = companies.id AND events.person_id IS NULL AND events.event_kind_id = 9 ORDER BY created_at DESC LIMIT 1) event_kind_9_order FROM "companies" SELECT people.id, people.name, (SELECT events.id FROM events WHERE events.referrer_id = 1470 AND events.company_id IS NULL AND events.person_id = people.id AND events.event_kind_id = 9 ORDER BY created_at DESC LIMIT 1) event_kind_9, (SELECT events.id FROM events WHERE events.company_id IS NULL AND events.person_id = people.id AND events.event_kind_id = 10 ORDER BY created_at DESC LIMIT 1) event_kind_10, (SELECT events.created_at FROM events WHERE events.referrer_id = 1470 AND events.company_id IS NULL AND events.person_id = people.id AND events.event_kind_id = 9 ORDER BY created_at DESC LIMIT 1) event_kind_9_order FROM "people" SELECT employees.id, employees.company_id, employees.person_id, (SELECT events.id FROM events WHERE events.referrer_id = 1470 AND events.company_id = employees.company_id AND events.person_id = employees.person_id AND events.event_kind_id = 9 ORDER BY created_at DESC LIMIT 1) event_kind_9, (SELECT events.id FROM events WHERE events.company_id = employees.company_id AND events.person_id = employees.person_id AND events.event_kind_id = 10 ORDER BY created_at DESC LIMIT 1) event_kind_10, (SELECT events.created_at FROM events 
WHERE events.referrer_id = 1470 AND events.company_id = employees.company_id AND events.person_id = employees.person_id AND events.event_kind_id = 9 ORDER BY created_at DESC LIMIT 1) event_kind_9_order FROM "employees" I rather suspect I'm doing this wrong. There should be an "easier" way to do it. One other filter criteria would be to filter on people/company names: WHERE LOWER(companies.name) LIKE '%apple%'. Note that I'm ordering by the dates of event_kind_9 here, and a secondary sort is by person/company name. To summarize: I want to paginate the result set, find the latest event for each cell, order the result set by the date of the latest event, and by company/person name, filter by referrer in some event kinds, but not others. For reference, I'm using PostgreSQL, from Ruby, ActiveRecord/Rails. The solution is pure SQL though.

    Read the article

  • Getting a null reference exception exporting a slightly complex rdlc report to excel

    - by George Handlin
    Have an issue in exporting a slightly complex report to excel from an rdlc. Exporting a simple tabular report works fine, but getting a null reference exception with a more complex one. As is usually the case, it works on the server in the development environment, but not in the test environment. The server in the development environment has iis, sql and sql reporting services on it and test only has iis (I didn't set them up). I'm guessing there is something that gets installed with SSRS that is not installed with just the report viewer. The report has several parameters going into it and a few generic lists going in for datasources. Some tables for display as well as a list. Working with the 2005 version. Exception follows: [NullReferenceException: Object reference not set to an instance of an object.] Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderPageLayout(PageLayout pageLayout, Int32& currentPageNumber, Stack& stack) +91 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderPageCollection(PageCollection pageCollection, Int32& currentPageNumber, Stack& stack, PageLayout& lastPageLayout) +607 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.GenerateWorkSheets() +150 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.GenerateMainSheet() +358 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderExcelWorkBook(CreateAndRegisterStream createAndRegisterStream) +158 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.ProcessReport(CreateAndRegisterStream createAndRegisterStream) +385 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.Render(Report report, NameValueCollection reportServerParameters, NameValueCollection deviceInfo, NameValueCollection clientCapabilities, EvaluateHeaderFooterExpressions evaluateHeaderFooterExpressions, CreateAndRegisterStream createAndRegisterStream) +230 [ReportRenderingException: An error occurred during rendering of the report.] Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.Render(Report report, NameValueCollection reportServerParameters, NameValueCollection deviceInfo, NameValueCollection clientCapabilities, EvaluateHeaderFooterExpressions evaluateHeaderFooterExpressions, CreateAndRegisterStream createAndRegisterStream) +296 Microsoft.ReportingServices.ReportProcessing.ReportProcessing.RenderSnapshot(IRenderingExtension renderer, CreateReportChunk createChunkCallback, RenderingContext rc, GetResource getResourceCallback) +1124 [WrapperReportRenderingException: An error occurred during rendering of the report.] 
Microsoft.ReportingServices.ReportProcessing.ReportProcessing.RenderSnapshot(IRenderingExtension renderer, CreateReportChunk createChunkCallback, RenderingContext rc, GetResource getResourceCallback) +1681 Microsoft.Reporting.LocalService.RenderWithDataCache(PreviewItemContext itemContext, ParameterInfoCollection reportParameters, IEnumerable dataSources, DatasourceCredentialsCollection credentials, IRenderingExtension renderer, ReportProcessing repProc, CreateAndRegisterStream createStreamCallback, ReportRuntimeSetup runtimeSetup) +1693 Microsoft.Reporting.LocalService.Render(PreviewItemContext itemContext, Boolean allowInternalRenderers, ParameterInfoCollection reportParameters, IEnumerable dataSources, DatasourceCredentialsCollection credentials, CreateAndRegisterStream createStreamCallback, ReportRuntimeSetup runtimeSetup, ProcessingMessageList& warnings) +227 Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings) +235 [LocalProcessingException: An error occurred during local report processing.] Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings) +276 Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings) +189 Microsoft.Reporting.WebForms.LocalReport.Render(String format, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings) +28 Microsoft.Reporting.WebForms.LocalReportControlSource.RenderReport(String format, String deviceInfo, NameValueCollection additionalParams, String& mimeType, String& fileNameExtension) +52 Microsoft.Reporting.WebForms.ExportOperation.PerformOperation(NameValueCollection urlQuery, HttpResponse response) +153 Microsoft.Reporting.WebForms.HttpHandler.ProcessRequest(HttpContext context) +202 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +303 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +64

    Read the article

  • ASP.NET MVC : AJAX ActionLink - Persist Data

    - by Mio
    Hi guys, I'm really new at this and I was searching the web for an answer to my question and couldn't find it, so here I am posting it :) I'm trying to create a new record in my table Facility. For my foreign keys I'm displaying the choices in tables instead of dropdowns. When the user clicks on the select link, which is an Ajax.ActionLink(), I want to retrieve the right record from the DB, set the foreign key of my Facility object to the one selected and replace the Div with a new Partial View. The problem is that when I try to submit the form, the Facility object doesn't seem to have the foreign key that I've just set in my AJAX action in my controller. And if the user has entered some data in the other fields of the create form, I don't want them to lose what they already entered. Here's my code. Model only contains a Facility. public ActionResult Create() { Model.Facility = new Facility(); return View(Model); } This is part of my Create View <div id="FacilityTypePartialView"> <% Html.RenderPartial("FacilityType"); %> </div> This is my Partial View FacilityType <% if (Model.IsNewFacility()) { %> <p> Id: <%= Html.Encode(Model.Facility.FacilityType.FId)%> </p> <p> Type: <%= Html.Encode(Model.Facility.FacilityType.FType)%> </p> <p> Description: <%= Html.Encode(Model.Facility.FacilityType.FDescription) %> </p> <% } %> <p> <%= Html.ActionLink("Manage Facility Type", "Index","FacilityType") %> </p> <table id="FacilityTypesList"> <tr> <th> Select </th> <th> FId </th> <th> FType </th> <th> FDescription </th> </tr> <% foreach (var item in Model.GetFacilityTypes()) { %> <tr> <td> <%=Ajax.ActionLink("Select", "FacilityTypeSelect", new { id = item.FId}, new AjaxOptions { UpdateTargetId = "FacilityTypePartialView" })%> </td> <td> <%= Html.Encode(item.FId) %> </td> <td> <%= Html.Encode(item.FType) %> </td> <td> <%= Html.Encode(item.FDescription) %> </td> </tr> <% } %> </table> Here's my AJAX action public PartialViewResult FacilityTypeSelect(int id) { Facility facility = new Facility(); facility.FacilityType = _repository.GetFacilityType(id); Model.Facility = facility; if (this.Request.IsAjaxRequest() == false) { return PartialView("FacilityType/FacilityType", Model); } else { return PartialView("FacilityType/FacilityTypeSelected", Model); } } Finally, my Post method [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create( Facility facility) { Model.Facility = facility; if (ModelState.IsValid) { try { _repository.AddEntity(facility); _repository.Save(); return RedirectToAction("Details", new { id = facility.Id }); } catch { } } return View("Create", Model); } My Facility object coming from the View has Facility.FacilityType set to nothing.

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    A long time ago in a galaxy far, far away... Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x PostgreSQL version: 8.4.x I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell). A New Hope Create the database location (/var has insufficient space; also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!): sudo mkdir -p /home/postgres/main sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres sudo chown -R postgres.postgres /home/postgres sudo chmod -R 700 /home/postgres sudo usermod -d /home/postgres/ postgres All good to here. Next, restart the server and configure the database using these installation instructions: sudo apt-get install postgresql pgadmin3 sudo /etc/init.d/postgresql-8.4 stop sudo vi /etc/postgresql/8.4/main/postgresql.conf Change data_directory to /home/postgres/main sudo /etc/init.d/postgresql-8.4 start sudo -u postgres psql postgres \password postgres sudo -u postgres createdb climate pgadmin3 Use pgadmin3 to configure the database and create a schema. The episode continues in a remote shell known as bash, with both databases running, and the installation of a set of tools with a rather unusual logo: SQL Fairy. perl Makefile.PL sudo make install sudo apt-get install perl-doc (strangely, it is not called perldoc) perldoc SQL::Translator::Manual Extract a PostgreSQL-friendly DDL and all the MySQL data: sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p The Database Strikes Back Recreate the structure in PostgreSQL as follows: pgadmin3 (switch to it) Click the Execute arbitrary SQL queries icon Open climate-pg-ddl.sql Search for TABLE " replace with TABLE climate." (insert the schema name climate) Search for on " replace with on climate." (insert the schema name climate) Press F5 to execute This results in: Query returned successfully with no result in 122 ms. Replies of the Jedi At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that they can be executed against PostgreSQL? How to I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)? How do you ensure the schema name comes through when transforming the data from MySQL to PostgreSQL inserts? Resources A fair bit of information was needed to get this far: https://help.ubuntu.com/community/PostgreSQL http://articles.sitepoint.com/article/site-mysql-postgresql-1 http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL http://pgfoundry.org/frs/shownotes.php?release_id=810 http://sqlfairy.sourceforge.net/ Thank you!
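
    One of the questions above, making new PostgreSQL rows continue numbering past the migrated data, usually comes down to resetting each serial sequence once the data has been loaded. Below is a minimal sketch of that step in Python with psycopg2; the table and column names are placeholders rather than the real climate schema, and it assumes the primary keys are serial columns.

    import psycopg2

    # Hypothetical table/primary-key pairs in the "climate" schema; substitute the real ones.
    TABLES = [("climate.station", "id"), ("climate.measurement", "id")]

    conn = psycopg2.connect(dbname="climate", user="postgres", password="secret", host="localhost")
    try:
        with conn:  # commits on success
            with conn.cursor() as cur:
                for table, pk in TABLES:
                    # pg_get_serial_sequence looks up the sequence backing a serial column;
                    # setval moves it to the current maximum so the next insert gets MAX+1.
                    cur.execute(
                        "SELECT setval(pg_get_serial_sequence(%s, %s), "
                        "COALESCE((SELECT MAX({0}) FROM {1}), 1))".format(pk, table),
                        (table, pk),
                    )
    finally:
        conn.close()

    The same setval() calls can of course be run directly in psql, once per migrated table.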

    Read the article

  • VB.Net: exception on ExecuteReader() using OleDbCommand and Access

    - by Shane Fagan
    Hi again all, im getting the error below for this SQL statement in VB.Net 'Fill in the datagrid with the info needed from the accdb file 'to make it simple to access the db connstring = "Provider=Microsoft.ACE.OLEDB.12.0;Data " connstring += "Source=" & Application.StartupPath & "\AuctioneerSystem.accdb" 'make the new connection conn = New System.Data.OleDb.OleDbConnection(connstring) 'the sql command SQLString = "SELECT AllPropertyDetails.PropertyID, Street, Town, County, Acres, Quotas, ResidenceDetails, Status, HighestBid, AskingPrice FROM AllPropertyDetails " SQLString += "INNER JOIN Land ON AllPropertyDetails.PropertyID = Land.PropertyID " SQLString += "WHERE Deleted = False " If PriceRadioButton.Checked = True Then SQLString += "ORDER BY AskingPrice ASC" ElseIf AcresRadioButton.Checked = True Then SQLString += "ORDER BY Acres ASC" End If 'try to open the connection conn.Open() 'if the connection is open If ConnectionState.Open.ToString = "Open" Then 'use the sqlstring and conn to create the command cmd = New System.Data.OleDb.OleDbCommand(SQLString, conn) 'read the db and put it into dr dr = cmd.ExecuteReader If dr.HasRows Then 'if there is rows in the db then make sure the list box is clear 'clear the rows and columns if there is rows in the data grid LandDataGridView.Rows.Clear() LandDataGridView.Columns.Clear() 'add the columns LandDataGridView.Columns.Add("PropertyNumber", "Property Number") LandDataGridView.Columns.Add("Address", "Address") LandDataGridView.Columns.Add("Acres", "No. of Acres") LandDataGridView.Columns.Add("Quotas", "Quotas") LandDataGridView.Columns.Add("Details", "Residence Details") LandDataGridView.Columns.Add("Status", "Status") LandDataGridView.Columns.Add("HighestBid", "Highest Bid") LandDataGridView.Columns.Add("Price", "Asking Price") While dr.Read 'output the fields into the data grid LandDataGridView.Rows.Add( _ dr.Item("PropertyID").ToString _ , dr.Item("Street").ToString & " " & dr.Item("Town").ToString & ", " & dr.Item("County").ToString _ , dr.Item("Acres").ToString _ , dr.Item("Quota").ToString _ , dr.Item("ResidenceDetails").ToString _ , dr.Item("Status").ToString _ , dr.Item("HighestBid").ToString _ , dr.Item("AskingPrice").ToString) End While End If 'close the data reader dr.Close() End If 'close the connection conn.Close() Any ideas why its not working? The fields in the DB and the table names seem ok but its not working :/ The tables are AllPropertyDetails ProperyID:Number Street: text Town: text County: text Status: text HighestBid: Currency AskingPrice: Currency Land PropertyID: Number Acres: Number Quotas: Text ResidenceDetails: text System.InvalidOperationException was unhandled Message="An error occurred creating the form. See Exception.InnerException for details. The error is: No value given for one or more required parameters." 
Source="AuctioneerProject" StackTrace: at AuctioneerProject.My.MyProject.MyForms.Create__Instance__[T](T Instance) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 190 at AuctioneerProject.My.MyProject.MyForms.get_LandReport() at AuctioneerProject.ReportsMenu.LandButton_Click(Object sender, EventArgs e) in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\ReportsMenu.vb:line 4 at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(ApplicationContext context) at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.OnRun() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(String[] commandLine) at AuctioneerProject.My.MyApplication.Main(String[] Args) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 81 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: System.Data.OleDb.OleDbException ErrorCode=-2147217904 Message="No value given for one or more required parameters." 
Source="Microsoft Office Access Database Engine" StackTrace: at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr) at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method) at System.Data.OleDb.OleDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.OleDb.OleDbCommand.ExecuteReader() at AuctioneerProject.LandReport.load_Land() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 37 at AuctioneerProject.LandReport.PriceRadioButton_CheckedChanged(Object sender, EventArgs e) in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 79 at System.Windows.Forms.RadioButton.OnCheckedChanged(EventArgs e) at System.Windows.Forms.RadioButton.set_Checked(Boolean value) at AuctioneerProject.LandReport.InitializeComponent() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.designer.vb:line 40 at AuctioneerProject.LandReport..ctor() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 5 InnerException:

    Read the article

  • ADO.NET (WCF) Data Services Query Interceptor Hangs IIS

    - by PreMagination
    I have an ADO.NET Data Service that's supposed to provide read-only access to a somewhat complex database. Logically I have table-per-type (TPT) inheritance in my data model but the EDM doesn't implement inheritance. (Limitation of EF and navigation properties on derived types. STILL not fixed in EF4!) I can query my EDM directly (using a separate project) using a copy of the query I'm trying to run against the web service, results are returned within 10 seconds. Disabling the query interceptors I'm able to make the same query against the web service, results are returned similarly quickly. I can enable some of the query interceptors and the results are returned slowly, up to a minute or so later. Alternatively, I can enable all the query interceptors, expand less of the properties on the main object I'm querying, and results are returned in a similar period of time. (I've increased some of the timeout periods) Up til this point Sql Profiler indicates the slow-down is the database. (That's a post for a different day) But when I enable all my query interceptors and expand all the properties I'd like to have the IIS worker process pegs the CPU for 20 minutes and a query is never even made against the database. This implies to me that yes, my implementation probably sucks but regardless the Data Services "tier" is having an issue it shouldn't. WCF tracing didn't reveal anything interesting to my untrained eye. Details: Data model: Agent-Person-Student Student has a collection of referrals Students and referrals are private, queries against the web service should only return "your" students and referrals. This means Person and Agent need to be filtered too. Other entities (Agent-Organization-School) can be accessed by anyone who has authenticated. The existing security model is poorly suited to perform this type of filtering for this type of data access, the query interceptors are complicated and cause EF to generate some entertaining sql queries. Sample Interceptor [QueryInterceptor("Agents")] public Expression<Func<Agent, Boolean>> OnQueryAgents() { //Agent is a Person(1), Educator(2), Student(3), or Other Person(13); allow if scope permissions exist return ag => (ag.AgentType.AgentTypeId == 1 || ag.AgentType.AgentTypeId == 2 || ag.AgentType.AgentTypeId == 3 || ag.AgentType.AgentTypeId == 13) && ag.Person.OrganizationPersons.Count<OrganizationPerson>(op => op.Organization.ScopePermissions.Any<ScopePermission> (p => p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name && p.ApplicationRoleAccount.Application.ApplicationId == 124) || op.Organization.HierarchyDescendents.Any<OrganizationsHierarchy>(oh => oh.AncestorOrganization.ScopePermissions.Any<ScopePermission> (p => p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name && p.ApplicationRoleAccount.Application.ApplicationId == 124))) > 0; } The query interceptors for Person, Student, Referral are all very similar, ie they traverse multiple same/similar tables to look for ScopePermissions as above. 
Sample Query var referrals = (from r in service.Referrals .Expand("Organization/ParentOrganization") .Expand("Educator/Person/Agent") .Expand("Student/Person/Agent") .Expand("Student") .Expand("Grade") .Expand("ProblemBehavior") .Expand("Location") .Expand("Motivation") .Expand("AdminDecision") .Expand("OthersInvolved") where r.DateCreated >= coupledays && r.DateDeleted == null select r); Any suggestions or tips would be greatly associated, for fixing my current implementation or in developing a new one, with the caveat that the database can't be changed and that ultimately I need to expose a large portion of the database via a web service that limits data access to the data authorized for, for the purpose of data integration with multiple outside parties. THANK YOU!!!

    Read the article

  • Heroku Problem During Database Pull of Rails App: Mysql::Error MySQL server has gone away

    - by Rich Apodaca
    Attempting to pull my database from Heroku gives an error partway through the process (below). Using: Snow Leopard; heroku-1.8.2; taps-0.2.26; rails-2.3.5; mysql-5.1.42. Database is smallish, as you can see from the error message. Heroku tech support says it's a problem on my system, but offers nothing in the way of how to solve it. I've seen the issue reported before - for example here. How can I get around this problem? The error: $ heroku db:pull Auto-detected local database: mysql://[...]@localhost/[...]?encoding=utf8 Receiving schema Receiving data 17 tables, 9,609 records [...] /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:166:in `query': Mysql::Error MySQL server has gone away (Sequel::DatabaseError) from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:166:in `_execute' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:125:in `execute' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/connection_pool.rb:101:in `hold' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:461:in `synchronize' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:125:in `execute' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:296:in `execute_dui' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset.rb:276:in `execute_dui' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:365:in `execute_dui' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `each' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:144:in `transaction' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/connection_pool.rb:108:in `hold' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:461:in `synchronize' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:138:in `transaction' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:211:in `cmd_receive_data' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:203:in `loop' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:203:in `cmd_receive_data' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:196:in `each' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:196:in `cmd_receive_data' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:175:in `cmd_receive' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:17:in `pull' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:119:in `taps_client' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:21:in `start' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:115:in `taps_client' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:16:in `pull' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:45:in `send' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:45:in `run_internal' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:17:in `run' from 
/Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/heroku:14 from /usr/bin/heroku:19:in `load' from /usr/bin/heroku:19
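
    For what it's worth, "MySQL server has gone away" part-way through the receive is often the local MySQL max_allowed_packet limit being smaller than one of the multi-row INSERTs that taps sends; that is an assumption here, not something the trace proves. A minimal diagnostic sketch in Python (the connection details are placeholders):

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="root", password="secret")
    cur = conn.cursor()

    cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
    name, value = cur.fetchone()
    print(name, value)  # the 1 MB default is easily exceeded by bulk inserts

    # Raise it for the running server (needs the SUPER privilege); new connections pick it up.
    cur.execute("SET GLOBAL max_allowed_packet = {}".format(64 * 1024 * 1024))

    cur.close()
    conn.close()

    Setting max_allowed_packet under [mysqld] in my.cnf and restarting MySQL makes the change permanent.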

    Read the article

  • Connection Reset on MySQL query

    - by sunwukung
    OK, I'm flummoxed. I'm trying to execute a query on a database (locally) and I keep getting a connection reset error. I've been using the method below in a generic DAO class to build a query string and pass to Zend_Db API. public function insert($params) { $loop = false; $keys = $values = ''; foreach($params as $k => $v){ if($loop == true){ $keys .= ','; $values .= ','; } $keys .= $this->db->quoteIdentifier($k); $values .= $this->db->quote($v); $loop = true; } $sql = "INSERT INTO " . $this->table_name . " ($keys) VALUES ($values)"; //formatResult returns an array of info regarding the status and any result sets of the query //I've commented that method call out anyway, so I don't think it's that try { $this->db->query($sql); return $this->formatResult(array( true, 'New record inserted into: '.$this->table_name )); }catch(PDOException $e) { return $this->formatResult($e); } } So far, this has worked fine - the errors have been occurring since we generated new tables to record user input. The insert string looks like this: INSERT INTO tablename(`id`,`title`,`summary`,`description`,`keywords`,`type_id`,`categories`) VALUES ('5539','Sample Title','Sample content',' \'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue ullamcorper nec. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nullam vel elit libero. Vestibulum in turpis nunc.\'','this,is,a,sample,array',1,'category title') You'll probably notice the big chunk of whitespace before the Lorem Ipsum string. The description field is being populated from a TinyMCE textarea - I'm guessing it's chucking in some line returns, so I've tried stripping those out. However, even if I disable the TinyMCE field, the reset error still occurs. The next port of call was checking the limits on the table, since it seems to insert if the length of "description" is around the 300 mark (it varies between 310 - 330). The field limit is set to VARCHAR(1500) and the validation on this field won't allow anything past bigger than 1200 with HTML, 800 without. The real kicker is that if I take this sql string and execute it via the command line, it works fine - so I can't for the life of me figure out what's wrong. So, in a nutshell, I'm stumped. Any ideas?
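
    Since the failure tracks the size and content of the description value rather than the SQL syntax, one way to rule out the hand-built quoting is to bind the values as parameters and let the driver escape them (in Zend_Db that would be the adapter's insert($table, $data) method). Below is a minimal sketch of the same idea in Python with mysql-connector; the column list is taken from the INSERT above and the credentials are placeholders.

    import mysql.connector

    # Values mirror the failing statement; leading whitespace, quotes and newlines
    # in description are handled by parameter binding rather than string building.
    row = {
        "id": 5539,
        "title": "Sample Title",
        "summary": "Sample content",
        "description": "  'Lorem ipsum dolor sit amet, consectetur adipiscing elit.'",
        "keywords": "this,is,a,sample,array",
        "type_id": 1,
        "categories": "category title",
    }

    conn = mysql.connector.connect(host="localhost", user="app", password="secret", database="app")
    cur = conn.cursor()

    columns = ", ".join(row)
    placeholders = ", ".join(["%s"] * len(row))
    cur.execute(
        "INSERT INTO tablename ({}) VALUES ({})".format(columns, placeholders),
        list(row.values()),
    )
    conn.commit()

    cur.close()
    conn.close()

    If the parameterised insert fails the same way on long values, server-side limits such as max_allowed_packet or wait_timeout are the next thing to check.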

    Read the article

  • pdfptable format problem?

    - by raj
    Hi, I am using the code below to convert an HTML string to PDF. I notice that at times it does print the tables in the PDF, but with no styles (no border, color, etc.). Below is the HTML string; can anyone suggest where I am missing something? Hi userThe Message ID 56456 has been assigned to you for Edition. AUTO VERIFY FOR CONGROUP cccccccc FAILEDSSID ssss message, RPTD BY rrrrSSID ssss RLD ll message, RPTD BY rrrrSSID ssss DEV ddd message, RPTD BY rrrr Table 12 EMC9998W message format <table style="border: medium solid #00FF00; width:100%; table-layout: auto; visibility: visible;" title="EMC9998W message format"> <tr> <td> <b>Exception code</b></td> <td> <b>Meaning</b></td> <td> <b>Message format</b></td> </tr> <tr> <td > 1460 </td> <td> DYNAMIC SPARING INVOKED</td> <td> 1</td> </tr> <tr> <td > 147D REMOTE </td> <td > LINK DIRECTOR PROBLEM/FAILURE</td> <td> 2</td> </tr> </table> Where MSG FORMAT 1 EMC9998W SSID ssss message, RPTD BY rrrr MSG FORMAT 2 EMC9998W SSID ssss message, RPTD BY rrrr MSG FORMAT 3 EMC9998W SSID ssss message, RPTD BY rrrr private void HtmltoPdf(string s, Paragraph p,Document doc) { string strLine = s; byte[] byteArray = Encoding.ASCII.GetBytes(strLine); MemoryStream stream = new MemoryStream(byteArray); StreamReader reader2 = new StreamReader(stream); StringReader sr2 = new StringReader(reader2.ReadToEnd()); iTextSharp.text.html.simpleparser.HTMLWorker worker = new HTMLWorker(doc); ArrayList elementlist = HTMLWorker.ParseToList(sr2, null); Phrase ph = new Phrase(); for (int k = 0; k < elementlist.Count; ++k) { IElement ielement = (IElement)elementlist[k]; ArrayList chunks = ielement.Chunks; if (ielement.Type == Element.PTABLE) { PdfPTable pt = (PdfPTable)ielement; pt.DefaultCell.Border = 2; PdfPTable t = new PdfPTable(1);//(new float[] {1f,1f,1f}); ph.Add(t); PdfPCell pcell = new PdfPCell(new Paragraph(ph)); t.AddCell(pcell); p.Add(t); foreach (PdfPRow row in t.Rows) { } } else if (ielement.Type == Element.LIST) { } else { ph.Clear(); ph.Add((IElement)elementlist[k]); p.Add(new Paragraph(ph)); } } sr2.Close(); reader2.Close(); stream.Close(); }
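
    A note on the symptom rather than the code: the HTMLWorker parser in that generation of iTextSharp understands only a limited subset of CSS, so inline borders and colours on the <table> tend to be dropped regardless of how the PdfPTable is rebuilt. As a comparison point only (a different library, not a fix to the C# above), here is a minimal Python sketch that renders a trimmed copy of the same markup with WeasyPrint, which does apply standard CSS; the output file name is arbitrary.

    from weasyprint import HTML

    # Trimmed copy of the table markup from the question above.
    html_string = """
    <table style="border: medium solid #00FF00; width: 100%;" title="EMC9998W message format">
      <tr><th>Exception code</th><th>Meaning</th><th>Message format</th></tr>
      <tr><td>1460</td><td>DYNAMIC SPARING INVOKED</td><td>1</td></tr>
      <tr><td>147D REMOTE</td><td>LINK DIRECTOR PROBLEM/FAILURE</td><td>2</td></tr>
    </table>
    """

    HTML(string=html_string).write_pdf("emc-table.pdf")

    It is only a comparison point, not a drop-in replacement for the C# pipeline.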

    Read the article

  • Doctrine Build-All Task fails in NetBeans - Class not found! Fatal Error: call to evictAll()

    - by Prasad
    When I build my model with the symfony doctrine:build --all --and-load command I have made no major changes to the model/schema, this is something new. I also tried sub-commands like build-model, build-tables, but they all hang.. I'm trying this in net beans. Any clue what this is? This command will remove all data in the following "dev" connection(s): - doctrine Are you sure you want to proceed? (y/N) y >> doctrine Dropping "doctrine" database >> doctrine Creating "dev" environment "doctrine" database >> doctrine generating model classes >> file+ C:\Documents and Settings\Gupte...\Temp/doctrine_schema_69845.yml >> tokens D:/projects/cim/lib/model/doctrine/base/BaseAffiliate.class.php >> tokens D:/projects/cim/lib/model/doctrine/base/BaseContact.class.php >> tokens D:/projects/cim/lib/model/doctr...e/BaseContactLocation.class.php >> tokens D:/projects/cim/lib/model/doctr...se/BaseGroupAffiliate.class.php >> tokens D:/projects/cim/lib/model/doctrine/base/BaseGrouping.class.php >> tokens D:/projects/cim/lib/model/doctrine/base/BaseLocation.class.php >> tokens D:/projects/cim/lib/model/doctr.../base/BasePhonenumber.class.php >> tokens D:/projects/cim/lib/model/doctrine/base/BaseTenant.class.php >> tokens D:/projects/cim/lib/model/doctr...base/BasesfGuardGroup.class.php >> tokens D:/projects/cim/lib/model/doctr...fGuardGroupPermission.class.php >> tokens D:/projects/cim/lib/model/doctr...BasesfGuardPermission.class.php >> tokens D:/projects/cim/lib/model/doctr...asesfGuardRememberKey.class.php >> tokens D:/projects/cim/lib/model/doctr.../base/BasesfGuardUser.class.php >> tokens D:/projects/cim/lib/model/doctr.../BasesfGuardUserGroup.class.php >> tokens D:/projects/cim/lib/model/doctr...sfGuardUserPermission.class.php >> autoload Resetting application autoloaders >> file- D:/projects/cim/cache/frontend/.../config/config_autoload.yml.php >> file- D:/projects/cim/cache/backend/dev/config/config_autoload.yml.php >> doctrine generating form classes [?php /** * Contact form base class. * * @method Contact getObject() Returns the current form's model object * * @package ##PROJECT_NAME## * @subpackage form * @author ##AUTHOR_NAME## * @version SVN: $Id: sfDoctrineFormGeneratedTemplate.php 24171 2009-11-19 16:37:50Z Kris.Wallsmith $ */ abstract class BaseContactForm extends BaseFormDoctrine { public function setup() { $this->setWidgets(array( 'id' Fatal error: Call to a member function evictAll() on a non-object in D:\projects\cim\lib\vendor\symfony\lib\plugins\sfDoctrinePlugin\lib\vendor\doctrine\Doctrine\Connection.php on line 1239 Call Stack: 0.9552 322760 1. {main}() D:\projects\cim\symfony:0 0.9594 587208 2. include('D:\projects\cim\lib\vendor\symfony\lib\command\cli.php') D:\projects\cim\symfony:14 11.9775 17118936 3. sfDatabaseManager->shutdown() D:\projects\cim\lib\vendor\symfony\lib\database\sfDatabaseManager.class.php:0 11.9775 17118936 4. sfDoctrineDatabase->shutdown() D:\projects\cim\lib\vendor\symfony\lib\database\sfDatabaseManager.class.php:134 11.9775 17118936 5. Doctrine_Manager->closeConnection() D:\projects\cim\lib\vendor\symfony\lib\plugins\sfDoctrinePlugin\lib\database\sfDoctrineDatabase.class.php:165 11.9775 17118936 6. Doctrine_Connection->close() D:\projects\cim\lib\vendor\symfony\lib\plugins\sfDoctrinePlugin\lib\vendor\doctrine\Doctrine\Manager.php:579 11.9776 17120160 7. 
Doctrine_Connection->clear() D:\projects\cim\lib\vendor\symfony\lib\plugins\sfDoctrinePlugin\lib\vendor\doctrine\Doctrine\Connection.php:1268 Couldn't find class Similar thing is mentioned here: http://osdir.com/ml/symfony-users/2010-01/msg00642.html

    Read the article

  • Rails on server syntax error?

    - by Danny McClelland
    Hi Everyone, I am trying to get my rails application running on my web server, but when I run the rake db:migrate I get the following error: r oot@oak [/home/macandco/rails_apps/survey_manager]# rake db:migrate (in /home/macandco/rails_apps/survey_manager) == Baseapp: migrating ======================================================== -- create_table(:settings, {:force=>true}) -> 0.0072s -- create_table(:users) -> 0.0072s -- add_index(:users, :login, {:unique=>true}) -> 0.0097s -- create_table(:profiles) -> 0.0084s -- create_table(:open_id_authentication_associations, {:force=>true}) -> 0.0067s -- create_table(:open_id_authentication_nonces, {:force=>true}) -> 0.0064s -- create_table(:roles) -> 0.0052s -- create_table(:roles_users, {:id=>false}) -> 0.0060s rake aborted! An error has occurred, all later migrations canceled: 555 5.5.2 Syntax error. g9sm2526951gvc.8 Has anyone come across this before? Thanks, Danny Main Migration file c lass Baseapp < ActiveRecord::Migration def self.up # Create Settings Table create_table :settings, :force => true do |t| t.string :label t.string :identifier t.text :description t.string :field_type, :default => 'string' t.text :value t.timestamps end # Create Users Table create_table :users do |t| t.string :login, :limit => 40 t.string :identity_url t.string :name, :limit => 100, :default => '', :null => true t.string :email, :limit => 100 t.string :mobile t.string :signaturenotes t.string :crypted_password, :limit => 40 t.string :salt, :limit => 40 t.string :remember_token, :limit => 40 t.string :activation_code, :limit => 40 t.string :state, :null => :false, :default => 'passive' t.datetime :remember_token_expires_at t.string :password_reset_code, :default => nil t.datetime :activated_at t.datetime :deleted_at t.timestamps end add_index :users, :login, :unique => true # Create Profile Table create_table :profiles do |t| t.references :user t.string :real_name t.string :location t.string :website t.string :mobile t.timestamps end # Create OpenID Tables create_table :open_id_authentication_associations, :force => true do |t| t.integer :issued, :lifetime t.string :handle, :assoc_type t.binary :server_url, :secret end create_table :open_id_authentication_nonces, :force => true do |t| t.integer :timestamp, :null => false t.string :server_url, :null => true t.string :salt, :null => false end create_table :roles do |t| t.column :name, :string end # generate the join table create_table :roles_users, :id => false do |t| t.column :role_id, :integer t.column :user_id, :integer end # Create admin role and user admin_role = Role.create(:name => 'admin') user = User.create do |u| u.login = 'admin' u.password = u.password_confirmation = 'advices' u.email = '[email protected]' end user.register! user.activate! user.roles << admin_role end def self.down # Drop all BaseApp drop_table :settings drop_table :users drop_table :profiles drop_table :open_id_authentication_associations drop_table :open_id_authentication_nonces drop_table :roles drop_table :roles_users end end

    Read the article

  • How to get table cells evenly spaced?

    - by DaveDev
    I'm trying to create a page with a number of static html tables on them. What do I need to do to get them to display each column the same size as each other column in the table? The HTML is as follows: <span class="Emphasis">Interest rates</span><br /> <table cellpadding="0px" cellspacing="0px" class="PerformanceTable"> <tr><th class="TableHeader"></th><th class="TableHeader">Current rate as at 31 March 2010</th><th class="TableHeader">31 December 2009</th><th class="TableHeader">31 March 2009</th></tr> <tr class="TableRow"><td>Index1</td><td class="PerformanceCell">1.00%</td><td>1.00%</td><td>1.50%</td></tr> <tr class="TableRow"><td>index2</td><td class="PerformanceCell">0.50%</td><td>0.50%</td><td>0.50%</td></tr> <tr class="TableRow"><td>index3</td><td class="PerformanceCell">0.25%</td><td>0.25%</td><td>0.25%</td></tr> </table> <span>Source: Bt</span><br /><br /> <span class="Emphasis">Stock markets</span><br /> <table cellpadding="0px" cellspacing="0px" class="PerformanceTable"> <tr><th class="TableHeader"></th><th class="TableHeader">As at 31 March 2010</th><th class="TableHeader">1 month change</th><th class="TableHeader">QTD change</th><th class="TableHeader">12 months change</th></tr> <tr class="TableRow"><td>index1</td><td class="PerformanceCell">1169.43</td><td class="PerformanceCell">5.88%</td><td class="PerformanceCell">4.87%</td><td class="PerformanceCell">46.57%</td></tr> <tr class="TableRow"><td>index2</td><td class="PerformanceCell">1958.34</td><td class="PerformanceCell">7.68%</td><td class="PerformanceCell">5.27%</td><td class="PerformanceCell">58.31%</td></tr> <tr class="TableRow"><td>index3</td><td class="PerformanceCell">5679.64</td><td class="PerformanceCell">6.07%</td><td class="PerformanceCell">4.93%</td><td class="PerformanceCell">44.66%</td></tr> <tr class="TableRow"><td>index4</td><td class="PerformanceCell">2943.92</td><td class="PerformanceCell">8.30%</td><td class="PerformanceCell">-0.98%</td><td class="PerformanceCell">44.52%</td></tr> <tr class="TableRow"><td>index5</td><td class="PerformanceCell">978.81</td><td class="PerformanceCell">9.47%</td><td class="PerformanceCell">7.85%</td><td class="PerformanceCell">26.52%</td></tr> <tr class="TableRow"><td>index6</td><td class="PerformanceCell">3177.77</td><td class="PerformanceCell">10.58%</td><td class="PerformanceCell">6.82%</td><td class="PerformanceCell">44.84%</td></tr> </table> <span>Source: B</span><br /><br /> I'm also open to suggestion on how to tidy this up, if there are any? :-)

    Read the article

  • Converting hierarchial data into an unordered list Programmatically using asp.net/C#

    - by kranthi
    Hi everyone, I have data which looks something like this. | id | name | depth | itemId | +-----+----------------------+-------+-------+ | 0 | ELECTRONICS | 0 | NULL | | 1 | TELEVISIONS | 1 | NULL | | 400 | Tube | 2 | NULL | | 432 | LCD | 3 | 1653 | | 422 | Plasma | 3 | 1633 | | 416 | Portable electronics | 3 | 1595 | | 401 | MP3 Player | 3 | 1249 | | 191 | Flash | 2 | NULL | | 555 | CD Players | 3 | 2198 | | 407 | 2 Way Radio | 3 | 1284 | | 388 | I've a problem with | 3 | 1181 | | 302 | What is your bill pa | 3 | 543 | | 203 | Where can I find my | 3 | 299 | | 201 | I would like to make | 3 | 288 | | 200 | Do you have any job | 3 | 284 | | 192 | About Us | 3 | NULL | | 199 | What can you tell me | 4 | 280 | | 198 | Do you help pr | 4 | 276 | | 197 | would someone help co| 4 | 272 | | 196 | can you help ch | 4 | 268 | | 195 | What awards has Veri | 4 | 264 | | 194 | What's the latest ne | 4 | 260 | | 193 | Can you tell me more | 4 | 256 | | 180 | Site Help | 2 | NULL | | 421 | Where are the | 3 | 1629 | | 311 | How can I access My | 3 | 557 | | 280 | Why isn't the page a | 3 | 512 | To convert the above data into an unordered list based on depth, I'm using the following code: int lastDepth = -1; int numUL = 0; StringBuilder output = new StringBuilder(); foreach (DataRow row in ds.Tables[0].Rows) { int currentDepth = Convert.ToInt32(row["Depth"]); if (lastDepth < currentDepth) { if (currentDepth == 0) { output.Append("<ul class=\"simpleTree\">"); output.AppendFormat("<li class=\"root\"><span><a href=\"#\" title=\"root\">root</a></span><ul><li class=\"open\" ><span><a href=\"#\" title={1}>{0}</a></span>", row["name"],row["id"]); } else { output.Append("<ul>"); if(currentDepth==1) output.AppendFormat("<li><span>{0}</span>", row["name"]); else output.AppendFormat("<li><span class=\"text\"><a href=\"#\" title={1}>{0}</a></span>", row["name"], row["id"]); } numUL++; } else if (lastDepth > currentDepth) { output.Append("</li></ul></li>"); if(currentDepth==1) output.AppendFormat("<li><span>{0}</span>", row["name"]); else output.AppendFormat("<li><span class=\"text\"><a href=\"#\" title={1}>{0}</a></span>", row["name"], row["id"]); numUL--; } else if (lastDepth > -1) { output.Append("</li>"); output.AppendFormat("<li><span class=\"text\"><a href=\"#\" title={1}>{0}</a></span>", row["name"],row["id"]); } lastDepth = currentDepth; } for (int i = 1; i <= numUL+1; i++) { output.Append("</li></ul>"); } myliteral.text=output.ToString(); But the resulting unordered list doesn't seem to be forming properly (I am using it to construct a tree). For example, "Site Help" with id '180' is supposed to appear as a direct child of "Televisions" with id '1', but it is appearing as a direct child of 'Flash' with id '191' using my code. So in addition to considering depth, I've decided to consider itemId as well in order to get the treeview properly. Those rows of the table with itemId not equal to null are not supposed to have a child node (i.e., they are the leaf nodes in the tree) and all the other nodes can have child nodes. Please could someone help me in constructing a proper unordered list based on my depth and itemId columns? Thanks.
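
    The nesting itself is easier to get right if the walk closes one "</ul></li>" pair per level whenever the depth drops, rather than a fixed "</li></ul></li>". Below is a minimal sketch of that traversal in Python over a hand-copied subset of the rows above; it only illustrates the open/close bookkeeping (with the same leaf vs. non-leaf markup as the C# code), not the ASP.NET rendering.

    # Rows are (id, name, depth, item_id), in the same pre-order as the table above
    # (hand-copied subset). item_id != None marks a leaf and gets the anchor markup.
    from html import escape

    rows = [
        (0, "ELECTRONICS", 0, None),
        (1, "TELEVISIONS", 1, None),
        (400, "Tube", 2, None),
        (432, "LCD", 3, 1653),
        (422, "Plasma", 3, 1633),
        (191, "Flash", 2, None),
        (555, "CD Players", 3, 2198),
        (180, "Site Help", 2, None),
        (421, "Where are the", 3, 1629),
    ]

    def to_unordered_list(rows):
        out = []
        prev_depth = -1
        for id_, name, depth, item_id in rows:
            if depth > prev_depth:
                out.append("<ul>")  # deeper: open a new level inside the still-open <li>
            elif depth < prev_depth:
                out.append("</li>" + "</ul></li>" * (prev_depth - depth))  # close the item, then one level per step up
            else:
                out.append("</li>")  # sibling: close the previous item
            if item_id is None:
                out.append("<li><span>{}</span>".format(escape(name)))
            else:
                out.append('<li><span class="text"><a href="#" title="{}">{}</a></span>'.format(id_, escape(name)))
            prev_depth = depth
        if prev_depth >= 0:
            out.append("</li>" + "</ul></li>" * prev_depth + "</ul>")  # close everything still open
        return "".join(out)

    print(to_unordered_list(rows))

    With the sample rows, "Site Help" comes out as a sibling of "Flash" under "TELEVISIONS", which is the shape described in the question.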

    Read the article

  • Hibernate : Foreign key constraint violation problem

    - by Vinze
    I have a com.mysql.jdbc.exceptions.MySQLIntegrityConstraintViolationException in my code (using Hibernate and Spring) and I can't figure why. My entities are Corpus and Semspace and there's a many-to-one relation from Semspace to Corpus as defined in my hibernate mapping configuration : <class name="xxx.entities.Semspace" table="Semspace" lazy="false" batch-size="30"> <id name="id" column="idSemspace" type="java.lang.Integer" unsaved-value="null"> <generator class="identity"/> </id> <property name="name" column="name" type="java.lang.String" not-null="true" unique="true" /> <many-to-one name="corpus" class="xxx.entities.Corpus" column="idCorpus" insert="false" update="false" /> [...] </class> <class name="xxx.entities.Corpus" table="Corpus" lazy="false" batch-size="30"> <id name="id" column="idCorpus" type="java.lang.Integer" unsaved-value="null"> <generator class="identity"/> </id> <property name="name" column="name" type="java.lang.String" not-null="true" unique="true" /> </class> And the Java code generating the exception is : Corpus corpus = Spring.getCorpusDAO().getCorpusById(corpusId); Semspace semspace = new Semspace(); semspace.setCorpus(corpus); semspace.setName(name); Spring.getSemspaceDAO().save(semspace); I checked and the corpus variable is not null (so it is in database as retrieved with the DAO) The full exception is : com.mysql.jdbc.exceptions.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails (`xxx/Semspace`, CONSTRAINT `FK4D6019AB6556109` FOREIGN KEY (`idCorpus`) REFERENCES `Corpus` (`idCorpus`)) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:931) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2941) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1623) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1715) at com.mysql.jdbc.Connection.execSQL(Connection.java:3249) at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1268) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1541) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1455) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1440) at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:102) at org.hibernate.id.IdentityGenerator$GetGeneratedKeysDelegate.executeAndExtract(IdentityGenerator.java:73) at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:33) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2158) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2638) at org.hibernate.action.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:48) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:250) at org.hibernate.event.def.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:298) at org.hibernate.event.def.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:181) at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:107) at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:187) at org.hibernate.event.def.DefaultSaveEventListener.saveWithGeneratedOrRequestedId(DefaultSaveEventListener.java:33) at 
org.hibernate.event.def.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:172) at org.hibernate.event.def.DefaultSaveEventListener.performSaveOrUpdate(DefaultSaveEventListener.java:27) at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:70) at org.hibernate.impl.SessionImpl.fireSave(SessionImpl.java:535) at org.hibernate.impl.SessionImpl.save(SessionImpl.java:523) at org.hibernate.impl.SessionImpl.save(SessionImpl.java:519) at org.springframework.orm.hibernate3.HibernateTemplate$12.doInHibernate(HibernateTemplate.java:642) at org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:373) at org.springframework.orm.hibernate3.HibernateTemplate.save(HibernateTemplate.java:639) at xxx.dao.impl.AbstractDAO.save(AbstractDAO.java:26) at org.apache.jsp.functions.semspaceManagement_jsp._jspService(semspaceManagement_jsp.java:218) [...] The foreign key constraint has been created (and added to database) by hibernate and I don't see where the constraint can be violated. The table are innodb and I tried to drop all tables and recreate it the problem remains... EDIT : Well I think I have a start of answer... I change the log level of hibernate to DEBUG and before it crash I have the following log insert into Semspace (name, [...]) values (?, [...]) So it looks like it does not try to insert the idCorpus and as it is not null it uses the default value "0" which does not refers to an existing entry in Corpus table...

    Read the article
