Search Results


  • DNS lookups failing somewhere between firewall and router

    - by TessellatingHeckler
    We have a setup of ADSL line - Cisco 837 ADSL router - ZyXEL ZyWall 35 firewall/NAT - switch - Intel load-balanced NICs in a server. It has been fine for years; suddenly DNS resolution stopped working on the server. There were no changes that I know of, so I can't work backwards from there. It was configured with the ISP's DNS servers; neither network device does DNS relaying. Wireshark shows the request go out but nothing comes back. The server's networking stack seems OK, though, because if we query an internal DNS server on a remote site, that works.

    I can log on to the Cisco, and DNS resolves OK from the command line. I can log on to the ZyWall, and DNS does not resolve from the command line. So the problem seems to be the firewall, patch cable or router, yes?

    On the router:

        interface Ethernet0
         ip address aaa.bbb.ccc.ddd 255.255.255.ddd
         ip tcp adjust-mss 1450
         hold-queue 100 out

    On the firewall: DNS server set to 8.8.8.8 (Google's), DNS traffic allowed LAN-WAN. What else should I look for?

    Update: Following this guide I've got traffic logging on the Cisco. I also have access to a public DNS server on which I can run tcpdump, to see things from the other side. And as per the comments below, I've tested with dig and see that DNS over TCP works, and over UDP does not. Currently:

    - A DNS request from the server using TCP shows up in the firewall log, in the Cisco log, and in tcpdump on the DNS server; the answer comes back; it works fine.
    - A DNS request from the server using UDP shows up in the firewall log and in the Cisco log, does NOT show up in tcpdump on the DNS server, and times out.
    - A DNS request from the Cisco (using UDP) does show up in tcpdump on the DNS server; the answer is received; it works fine.
    - Ping requests from the server and the Cisco to the DNS server show up in tcpdump on the DNS server.
    - A DNS request from the server using UDP does show up on the firewall.

    Summary: TCP seems fine throughout. UDP works over the ADSL and to the Cisco, and it works from the server to the Cisco, but it doesn't cross the Cisco properly, it seems.

    I did see the Cisco showing as connected at 10Mb/full-duplex internally, and the firewall showing as 100Mb/full-duplex externally. I have forced the firewall to 10Mb and rebooted both devices. That seemed to help get UDP traffic (server-firewall-cisco) instead of (server-firewall), but did not fix it.

    Update: Sanitized Cisco config:

        version 12.2
        no service pad
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname cisco
        !
        logging queue-limit 100
        enable secret 5 {password}
        enable password 7 {password}
        !
        ip subnet-zero
        ip domain name example.org
        ip name-server {nameserver_IP}
        !
        ip audit notify log
        ip audit po max-events 100
        no ftp-server write-enable
        !
        interface Ethernet0
         ip address {Inside_public_IP} 255.255.255.248
         ip tcp adjust-mss 1460
         hold-queue 100 out
        !
        interface ATM0
         no ip address
         no atm ilmi-keepalive
         pvc 0/38
          encapsulation aal5mux ppp dialer
          dialer pool-member 1
         !
         dsl operating-mode auto
        !
        interface Dialer1
         ip unnumbered Ethernet0
         encapsulation ppp
         dialer pool 1
         dialer idle-timeout 0
         dialer persistent
         no cdp enable
         ppp chap hostname {ADSL_Username}
         ppp chap password 7 {ADSL_Password}
        !
        ip classless
        ip route 0.0.0.0 0.0.0.0 Dialer1
        no ip http server
        no ip http secure-server
        !
        access-list 23 permit {IP}
        dialer-list 1 protocol ip permit
        no cdp run
        snmp-server enable traps tty
        !
        {con, vty}
        end
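
    For reference, a quick way to narrow this down from the server is to compare DNS over TCP and UDP directly, and to vary the UDP answer size to see whether small replies get through while large ones do not (which would point at fragmentation on the ADSL/ATM path rather than a blanket UDP block). A minimal sketch with dig and tcpdump; 8.8.8.8, example.org and YOUR.PUBLIC.IP are placeholders:

        # TCP lookup (the case that works)
        dig +tcp @8.8.8.8 example.org A

        # Plain UDP lookup (the failing case)
        dig +notcp @8.8.8.8 example.org A

        # Small vs. large UDP answers, to test for fragmentation/MTU issues
        dig +notcp +bufsize=512 @8.8.8.8 example.org A
        dig +notcp +bufsize=4096 @8.8.8.8 example.org TXT

        # On the public DNS server, watch for the queries arriving
        tcpdump -ni eth0 'port 53 and host YOUR.PUBLIC.IP'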


  • Upgraded Ubuntu, all drives in one zpool marked unavailable

    - by Matt Sieker
    I just upgraded to Ubuntu 14.04, and I had two ZFS pools on the server. There was some minor issue with me fighting with the ZFS driver and the kernel version, but that's worked out now. One pool came online and mounted fine. The other didn't. The main difference between the two is that one was just a pool of disks (video/music storage), and the other was a raidz set (documents, etc.).

    I've already attempted exporting and re-importing the pool, to no avail; attempting to import gets me this:

        root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/
           pool: storage
             id: 15855792916570596778
          state: UNAVAIL
         status: One or more devices contains corrupted data.
         action: The pool cannot be imported due to damaged devices or data.
            see: http://zfsonlinux.org/msg/ZFS-8000-5E
         config:

                storage                                      UNAVAIL  insufficient replicas
                  raidz1-0                                   UNAVAIL  insufficient replicas
                    ata-SAMSUNG_HD103SJ_S246J90B134910       UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523  UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969  UNAVAIL

    The symlinks for those in /dev/disk/by-id also exist:

        root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51*
        lrwxrwxrwx 1 root root  9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9

    Inspecting the various /dev/sd* devices listed, they appear to be the correct ones (the three 1TB drives that were in the raidz array).

    I've run zdb -l on each drive, dumping it to a file, and run a diff. The only differences on the three are the guid fields (which I assume is expected). All three labels on each one are basically identical, and are as follows:

        version: 5000
        name: 'storage'
        state: 0
        txg: 4
        pool_guid: 15855792916570596778
        hostname: 'kyou'
        top_guid: 1683909657511667860
        guid: 8815283814047599968
        vdev_children: 1
        vdev_tree:
            type: 'raidz'
            id: 0
            guid: 1683909657511667860
            nparity: 1
            metaslab_array: 33
            metaslab_shift: 34
            ashift: 9
            asize: 3000569954304
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 8815283814047599968
                path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18036424618735999728
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 10307555127976192266
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1'
                whole_disk: 1
                create_txg: 4
        features_for_read:

    Stupidly, I do not have a recent backup of this pool. However, the pool was fine before the reboot, and Linux sees the disks fine (I have smartctl running now to double-check).

    So, in summary:

    - I upgraded Ubuntu and lost access to one of my two zpools.
    - The difference between the pools is that the one that came up was JBOD; the other was raidz.
    - All drives in the unmountable zpool are marked UNAVAIL, with no notes about corrupted data.
    - The pools were both created with disks referenced from /dev/disk/by-id/.
    - Symlinks from /dev/disk/by-id to the various /dev/sd devices seem to be correct.
    - zdb can read the labels from the drives.
    - The pool has already been exported and re-imported once, and now fails to import again.

    Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a reasonable array? Can I run zpool create zraid ... without losing my data? Is my data gone anyhow?
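
    Before trying anything riskier, two non-destructive checks may be worth a shot: a read-only import (supported by reasonably recent ZFS-on-Linux releases), and a fresh label dump from each member partition to confirm all three guids are still readable. A sketch using the device names from the listing above:

        # Attempt a read-only import so nothing is written to the disks
        zpool import -o readonly=on -d /dev/disk/by-id storage

        # Re-dump the label of each member, then diff them pairwise
        for d in ata-SAMSUNG_HD103SJ_S246J90B134910-part1 \
                 ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 \
                 ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1; do
            zdb -l "/dev/disk/by-id/$d" > "/tmp/$d.label"
        done
        diff /tmp/ata-SAMSUNG_HD103SJ_S246J90B134910-part1.label \
             /tmp/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1.label

    Whatever else happens, do not run zpool create over these disks: it would write new labels over the existing ones and destroy what is left of the pool metadata.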


  • Installing chrome in Centos 6.2 (Final)

    - by usjes
    I need to install Chrome on a dedicated CentOS server that I only access via SSH; it doesn't have X or any graphical windowing stuff. I need it to be able to pack extensions using google-chrome --pack-extension. I tried adding this to /etc/yum.repos.d/google.repo:

        [google-chrome]
        name=google-chrome - 32-bit
        baseurl=http://dl.google.com/linux/chrome/rpm/stable/i386
        enabled=1
        gpgcheck=1
        gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

    and then yum install google-chrome-stable, but there's a huge list of dependency problems. How can I install Chrome without breaking anything else?

    UPDATE: OK, I installed perl-CGI from an .rpm because yum couldn't find it. Now the dependencies resolve and it shows me this list of packages to install:

        Dependencies Resolved

        Package                      Arch    Version                           Repository     Size
        ================================================================================
        Installing:
         google-chrome-stable        x86_64  19.0.1084.52-138391               google-chrome  35 M
        Installing for dependencies:
         ConsoleKit                  x86_64  0.4.1-3.el6                       base           82 k
         ConsoleKit-libs             x86_64  0.4.1-3.el6                       base           17 k
         GConf2                      x86_64  2.28.0-6.el6                      base          964 k
         ORBit2                      x86_64  2.14.17-3.1.el6                   base          168 k
         bc                          x86_64  1.06.95-1.el6                     base          110 k
         cdparanoia-libs             x86_64  10.2-5.1.el6                      base           47 k
         cups                        x86_64  1:1.4.2-44.el6_2.3                updates       2.3 M
         dbus                        x86_64  1:1.2.24-5.el6_1                  base          207 k
         desktop-file-utils          x86_64  0.15-9.el6                        base           47 k
         ed                          x86_64  1.1-3.3.el6                       base           72 k
         eggdbus                     x86_64  0.6-3.el6                         base           91 k
         foomatic                    x86_64  4.0.4-1.el6_1.1                   base          251 k
         foomatic-db                 noarch  4.0-7.20091126.el6                base          980 k
         foomatic-db-filesystem      noarch  4.0-7.20091126.el6                base          4.4 k
         foomatic-db-ppds            noarch  4.0-7.20091126.el6                base           19 M
         ghostscript                 x86_64  8.70-11.el6_2.6                   updates       4.4 M
         ghostscript-fonts           noarch  5.50-23.1.el6                     base          751 k
         gstreamer                   x86_64  0.10.29-1.el6                     base          764 k
         gstreamer-plugins-base      x86_64  0.10.29-1.el6                     base          942 k
         gstreamer-tools             x86_64  0.10.29-1.el6                     base           23 k
         iso-codes                   noarch  3.16-2.el6                        base          2.4 M
         lcms-libs                   x86_64  1.19-1.el6                        base          100 k
         libIDL                      x86_64  0.8.13-2.1.el6                    base           83 k
         libXScrnSaver               x86_64  1.2.0-1.el6                       base           19 k
         libXfont                    x86_64  1.4.1-2.el6_1                     base          128 k
         libXv                       x86_64  1.0.5-1.el6                       base           21 k
         libfontenc                  x86_64  1.0.5-2.el6                       base           24 k
         libgudev1                   x86_64  147-2.40.el6                      base           59 k
         libmng                      x86_64  1.0.10-4.1.el6                    base          165 k
         libogg                      x86_64  2:1.1.4-2.1.el6                   base           21 k
         liboil                      x86_64  0.3.16-4.1.el6                    base          121 k
         libtheora                   x86_64  1:1.1.0-2.el6                     base          129 k
         libvisual                   x86_64  0.4.0-9.1.el6                     base          135 k
         libvorbis                   x86_64  1:1.2.3-4.el6_2.1                 updates       168 k
         mailx                       x86_64  12.4-6.el6                        base          234 k
         man                         x86_64  1.6f-29.el6                       base          263 k
         mesa-libGLU                 x86_64  7.11-3.el6                        base          201 k
         nvidia-graphics195.30-libs  x86_64  195.30-120.el6                    atrpms         13 M
         openjpeg-libs               x86_64  1.3-7.el6                         base           59 k
         pax                         x86_64  3.4-10.1.el6                      base           69 k
         phonon-backend-gstreamer    x86_64  1:4.6.2-20.el6                    base          125 k
         polkit                      x86_64  0.96-2.el6_0.1                    base          158 k
         poppler                     x86_64  0.12.4-3.el6_0.1                  base          557 k
         poppler-data                noarch  0.4.0-1.el6                       base          2.2 M
         poppler-utils               x86_64  0.12.4-3.el6_0.1                  base           73 k
         portreserve                 x86_64  0.0.4-4.el6_1.1                   base           22 k
         qt                          x86_64  1:4.6.2-20.el6                    base          4.0 M
         qt-sqlite                   x86_64  1:4.6.2-20.el6                    base           50 k
         qt-x11                      x86_64  1:4.6.2-20.el6                    base           12 M
         qt3                         x86_64  3.3.8b-30.el6                     base          3.5 M
         redhat-lsb                  x86_64  4.0-3.el6.centos                  base           24 k
         redhat-lsb-graphics         x86_64  4.0-3.el6.centos                  base           12 k
         redhat-lsb-printing         x86_64  4.0-3.el6.centos                  base           11 k
         sgml-common                 noarch  0.6.3-32.el6                      base           43 k
         time                        x86_64  1.7-37.1.el6                      base           26 k
         tmpwatch                    x86_64  2.9.16-4.el6                      base           31 k
         xdg-utils                   noarch  1.0.2-17.20091016cvs.el6          base           58 k
         xml-common                  noarch  0.6.3-32.el6                      base          9.5 k
         xorg-x11-font-utils         x86_64  1:7.2-11.el6                      base           75 k
         xz                          x86_64  4.999.9-0.3.beta.20091007git.el6  base          137 k
         xz-lzma-compat              x86_64  4.999.9-0.3.beta.20091007git.el6  base           16 k

        Transaction Summary
        ================================================================================
        Install  62 Package(s)

    Is it safe to continue and install all of that, or could I break something already installed?
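
    For what it's worth, before committing to a 62-package transaction on a headless box, you can inspect what the package pulls in and stage the downloads without installing anything; a sketch using yum-utils (on CentOS 6 the --downloadonly flag needs the yum-plugin-downloadonly package):

        # Show the dependency tree yum has resolved
        yum deplist google-chrome-stable | less

        # Dry run: fetch the packages without installing them
        yum install --downloadonly --downloaddir=/tmp/chrome-deps google-chrome-stable

    Note that the transaction above contains only new installs, no updates or replacements, so the main cost is the X11/printing/Qt stack taking up disk space on a server that will never run a display.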


  • Forwarding rsyslog to syslog-ng, with FQDN and facility separation

    - by Joshua Miller
    I'm attempting to configure my rsyslog clients to forward messages to my syslog-ng log repository systems. Forwarding messages works "out of the box", but my clients are logging short names, not FQDNs. As a result the messages on the syslog repo use short names as well, which is a problem because one can't easily determine which system a message originated from. My clients get their names through DHCP/DNS. I've tried a number of solutions, but without success. I'm using rsyslog 4.6.2 and syslog-ng 3.2.5.

    - I've tried setting $PreserveFQDN on as the first directive in /etc/rsyslog.conf (and restarting rsyslog, of course). It seems to have no effect. hostname --fqdn on the client returns the proper FQDN, so the problem isn't whether the system can actually figure out its own FQDN.
    - $LocalHostName <fqdn> looked promising, but this directive isn't available in my version of rsyslog (available since 4.7.4+, 5.7.3+, 6.1.3+). Upgrading isn't an option at the moment.
    - Configuring the syslog-ng server to populate names based on reverse DNS lookups isn't an option. There are complexities with reverse DNS and the public cloud.
    - Specifying a custom template for the forwarder seems like a viable option at first glance. I can specify the following, which causes local logging to begin using the FQDN on the syslog-ng repo:

          $template MyTemplate, "%timestamp% <FQDN> %syslogtag%%msg%"
          $ActionForwardDefaultTemplate MyTemplate

      However, when I put this in place, syslog-ng seems unable to categorize messages by facility or priority. Messages come in with the FQDN, but everything is put into user.log. When I don't use the custom template, messages are properly categorized under facility and priority, but with the short name.

    So, in summary: if I manually trick rsyslog into including the FQDN, priority and facility become details lost to syslog-ng. How can I get rsyslog to do FQDN logging that works properly going to a syslog-ng repository?

    rsyslog client config:

        $ModLoad imuxsock.so  # provides support for local system logging (e.g. via logger command)
        $ModLoad imklog.so    # provides kernel logging support (previously done by rklogd)
        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
        *.info;mail.none;authpriv.none;cron.none  /var/log/messages
        authpriv.*                                /var/log/secure
        mail.*                                    -/var/log/maillog
        cron.*                                    /var/log/cron
        *.emerg                                   *
        uucp,news.crit                            /var/log/spooler
        local7.*                                  /var/log/boot.log
        $WorkDirectory /var/spool/rsyslog  # where to place spool files
        $ActionQueueFileName fwdRule1      # unique name prefix for spool files
        $ActionQueueMaxDiskSpace 1g        # 1gb space limit (use as much as possible)
        $ActionQueueSaveOnShutdown on      # save messages to disk on shutdown
        $ActionQueueType LinkedList        # run asynchronously
        $ActionResumeRetryCount -1         # infinite retries if host is down
        *.* @syslog-ng1.example.com
        *.* @syslog-ng2.example.com

    syslog-ng configuration (abridged for brevity):

        options {
            flush_lines (0);
            time_reopen (10);
            log_fifo_size (1000);
            long_hostnames (off);
            use_dns (no);
            use_fqdn (yes);
            create_dirs (no);
            keep_hostname (yes);
        };

        source src {
            unix-stream("/dev/log");
            internal();
            udp(ip(0.0.0.0) port(514));
        };

        destination per_host_destination {
            file(
                "/var/log/syslog-ng/devices/$HOST/$FACILITY.log"
                owner("root") group("root") perm(0644)
                dir_owner(root) dir_group(root) dir_perm(0775)
                create_dirs(yes));
        };

        log { source(src); destination(per_facility_destination); };
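
    One detail worth checking with the custom-template approach: the template above drops the syslog PRI header, and syslog-ng derives facility and severity from that leading "<N>" number, filing anything without it under user.*. A hedged sketch of a forwarding template that keeps the PRI while hard-coding the FQDN (the hostname literal is an assumption you would set per client):

        $template FqdnForward,"<%PRI%>%timestamp% client1.example.com %syslogtag%%msg%"
        $ActionForwardDefaultTemplate FqdnForward

    If that restores the facility/priority split on the syslog-ng side, the remaining work is just generating the per-client template.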


  • ubuntu mount iso but some files are unreadable

    - by Chao
    I'm new to Linux and just installed Ubuntu 12.04 amd64 this month. I failed to install TeX Live from the texlive2012 ISO image. I used the recommended command to mount it:

        mount -t iso9660 -o ro,loop,noauto /your/texlive2012.iso /mnt

    But the installer failed to read some files. The ISO is fine; I checked the md5. I extracted everything from the ISO with the archive manager, and it installed successfully. So why is mount not working? Thanks.

    UPDATE: With the Furius ISO Mount tool, mounting with Fuse, it installed, with a warning:

        Summary of warning messages during installation:
          Downloaded ./archive/calligra-type1.tar.xz, size equal, but md5sum differs; downloading again.

    While mounting with Loop, it failed to install. Updated error message from the terminal, mounted with Furius ISO Mount, loop:

        texlive2012-20120701_iso$ ./install-tl -gui
        Loading ./tlpkg/texlive.tlpdb
        Installing TeX Live 2012 from: .
        Platform: x86_64-linux = 'x86_64 with GNU/Linux'
        Distribution: inst (compressed)
        Directory for temporary files: /tmp
        Installing [0001/2481, time/total: ??:??/??:??]: 12many [3k]
        Installing [0002/2481, time/total: 00:00/00:00]: 2up [4k]
        Installing [0003/2481, time/total: 00:00/00:00]: Asana-Math [457k]
        Installing [0004/2481, time/total: 00:00/00:00]: ESIEEcv [2k]
        ...
        Installing [0265/2481, time/total: 00:10/01:09]: calctab [5k]
        Installing [0266/2481, time/total: 00:10/01:09]: calligra [42k]
        Installing [0267/2481, time/total: 00:10/01:09]: calligra-type1 [59k]
        Downloaded ./archive/calligra-type1.tar.xz, size equal, but md5sum differs; downloading again.
        ./tlpkg/installer/xz/xzdec.x86_64-linux: (stdin): File is corrupt
        tar: Unexpected EOF in archive
        tar: rmtlseek not stopped at a record boundary
        tar: Error is not recoverable: exiting now
        untar: untarring /home/lichao/ttt/temp/calligra-type1.tar failed (in /home/lichao/ttt/texmf-dist)
        untarring /home/lichao/ttt/temp/calligra-type1.tar failed, stopping install.
        Installation failed. Rerunning the installer will try to restart the installation.
        Or you can restart by running the installer with:
          install-tl --profile installation.profile [EXTRA-ARGS]
        ./install-tl: Could not write to install-tl.log, so flushing messages to stderr.
        Loading ./tlpkg/texlive.tlpdb
        Installing TeX Live 2012 from: .
        Platform: x86_64-linux = 'x86_64 with GNU/Linux'
        Distribution: inst (compressed)
        Directory for temporary files: /tmp
        Installer revision: 26794
        Database revision: 26935
        Installing [0001/2481, time/total: ??:??/??:??]: 12many [3k]
        Installing [0002/2481, time/total: 00:00/00:00]: 2up [4k]
        Installing [0003/2481, time/total: 00:00/00:00]: Asana-Math [457k]
        Installing [0004/2481, time/total: 00:00/00:00]: ESIEEcv [2k]
        Installing [0005/2481, time/total: 00:00/00:00]: FAQ-en [1k]
        ...
        Installing [0262/2481, time/total: 00:10/01:09]: c90 [2k]
        Installing [0263/2481, time/total: 00:10/01:09]: cachepic [5k]
        Installing [0264/2481, time/total: 00:10/01:09]: cachepic.x86_64-linux [1k]
        Installing [0265/2481, time/total: 00:10/01:09]: calctab [5k]
        Installing [0266/2481, time/total: 00:10/01:09]: calligra [42k]
        Installing [0267/2481, time/total: 00:10/01:09]: calligra-type1 [59k]
        Downloaded ./archive/calligra-type1.tar.xz, size equal, but md5sum differs; downloading again.
        untar: untarring /home/lichao/ttt/temp/calligra-type1.tar failed (in /home/lichao/ttt/texmf-dist)
        untarring /home/lichao/ttt/temp/calligra-type1.tar failed, stopping install.
        Installation failed. Rerunning the installer will try to restart the installation.
        Or you can restart by running the installer with:
          install-tl --profile installation.profile [EXTRA-ARGS]
        Segmentation fault (core dumped)

    I am sure that the ISO is fine. I can open it with the archive manager and all the files are good. But after mounting it, even the archive manager fails to open some files (files which open fine when the ISO itself is opened in the archive manager).
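
    A quick way to pin down whether the loop mount is returning corrupt data is to compare checksums of the same file as seen through the mount and as extracted by the archive manager; a sketch, with the paths as placeholders:

        mount -t iso9660 -o ro,loop /path/to/texlive2012.iso /mnt
        md5sum /mnt/archive/calligra-type1.tar.xz
        md5sum ~/extracted/archive/calligra-type1.tar.xz

        # Or verify every file on the mount against the extracted copy
        (cd ~/extracted && find . -type f -exec md5sum {} +) > /tmp/iso.md5
        (cd /mnt && md5sum -c /tmp/iso.md5 | grep -v ': OK$')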


  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites, and until recently I've used the Google DNS servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time: almost all of the major service providers are using anycast. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try to catch an intermittent problem, and a change in the anycast priority while testing latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches.

    So can anyone recommend servers that:

    1. are not using anycast
    2. are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side)
    3. will respond to ICMP requests
    4. have an available service that runs on TCP/UDP (HTTP or DNS preferably)
    5. won't consider an automated request every 10 minutes to be abuse
    6. are accessible from anywhere in the world
    7. are not local to the ISP (consider this an investigation of a hostile party)

    Thanks in advance.

    Edit: added #6 and #7 above.

    More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these) and refuse to escalate the problem (even though two of these businesses have ISP-provided modems, and all of us have completely different routers/services running). I am already quite familiar with the need to test an ISP-controlled IP, but they are actively dropping all packets targeted at gateway IP addresses and only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, to systems one hop away from the affected node, and to systems completely outside the network. Unfortunately, all of the systems I currently have are administered either directly by myself or by people who are biased enough to assist me. I need several systems included in the trace/log/graphs that are 100% not in the control of either myself or the ISP, so that the graphs have a stable, unbiased control group. These requirements are straight from legal; I'm just trying to make sure that everything that could be argued to invalidate the data is already covered.

    In summary: I need to be able to show TCP/UDP/ICMP as three separate data points, and I need to be able to show the connections inside the local node, from the local node to another nearby node, from those two nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/OpenDNS/Yahoo/MSN/Facebook etc. all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions of an IP or server that is available for this type of testing. I was hoping someone knew of a system run by an organization such as ISC or ICANN, or perhaps even a .gov server (FCC or NSA, maybe?) set up for this type of testing. Thanks again.
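
    For the logging itself, a single looped script that records the three protocols separately keeps the data sets comparable; a rough sketch, where 192.0.2.10 is a documentation address standing in for whichever target is chosen (it assumes the target runs DNS for the UDP probe and HTTP for the TCP probe):

        #!/bin/sh
        TARGET=192.0.2.10
        while true; do
            ts=$(date -u +%FT%TZ)
            echo "== $ts" >> icmp.log; ping -q -c 5 "$TARGET" >> icmp.log 2>&1
            echo "== $ts" >> udp.log;  dig +notcp +time=2 @"$TARGET" . NS >> udp.log 2>&1
            curl -s -o /dev/null -w "$ts %{time_total}\n" "http://$TARGET/" >> tcp.log 2>&1
            sleep 600
        done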


  • Lustre - issues with simple setup

    - by ethrbunny
    Issue: I'm trying to assess the (possible) use of Lustre for our group. To this end I've been trying to create a simple system to explore the nuances. I can't seem to get past the 'llmount.sh' test with any degree of success.

    What I've done: Each system (throwaway PCs with a 70GB HD and 2GB RAM) is formatted with CentOS 6.2. I then update everything, install the Lustre kernel from downloads.whamcloud.com, and add the various (appropriate) lustre and e2fs RPM files. Systems are rebooted and tested with 'llmount.sh' (and then cleared with 'llmountcleanup.sh'). All is well to this point.

    First I create an MDS/MDT system via:

        /usr/sbin/mkfs.lustre --mgs --mdt --fsname=lustre --device-size=200000 --param sys.timeout=20 --mountfsoptions=errors=remount-ro,user_xattr,acl --param lov.stripesize=1048576 --param lov.stripecount=0 --param mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype ldiskfs --reformat /tmp/lustre-mdt1

    and then:

        mkdir -p /mnt/mds1
        mount -t lustre -o loop,user_xattr,acl /tmp/lustre-mdt1 /mnt/mds1

    Next I take three systems and create a 2GB loop mount via:

        /usr/sbin/mkfs.lustre --ost --fsname=lustre --device-size=200000 --param sys.timeout=20 --mgsnode=lustre_MDS0@tcp --backfstype ldiskfs --reformat /tmp/lustre-ost1
        mkdir -p /mnt/ost1
        mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1

    The logs on the MDT box show the OSS boxes connecting up. All appears OK.

    Last, I create a client and attach to the MDT box:

        mkdir -p /mnt/lustre
        mount -t lustre -o user_xattr,acl,flock luster_MDS0@tcp:/lustre /mnt/lustre

    Again, the log on the MDT box shows the client connection. It appears to be successful.

    Here's where the issues (appear to) start. If I do a 'df -h' on the client, it hangs after showing the system drives. If I attempt to create files (via 'dd') on the lustre mount, the session hangs and the job can't be killed. Rebooting the client is the only solution.

    If I do a 'lctl dl' from the client, it shows that only 2 of the 3 OST boxes are found and 'UP':

        [root@lfsclient0 etc]# lctl dl
          0 UP mgc MGC10.127.24.42@tcp 282d249f-fcb2-b90f-8c4e-2f1415485410 5
          1 UP lov lustre-clilov-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4
          2 UP lmv lustre-clilmv-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4
          3 UP mdc lustre-MDT0000-mdc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5
          4 UP osc lustre-OST0000-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5
          5 UP osc lustre-OST0003-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5

    Doing an 'lfs df' from the client shows:

        [root@lfsclient0 etc]# lfs df
        UUID                 1K-blocks   Used  Available Use% Mounted on
        lustre-MDT0000_UUID     149944  16900     123044  12% /mnt/lustre[MDT:0]
        OST0000             : inactive device
        OST0001             : Resource temporarily unavailable
        OST0002             : Resource temporarily unavailable
        lustre-OST0003_UUID     187464  24764     152636  14% /mnt/lustre[OST:3]
        filesystem summary:     187464  24764     152636  14% /mnt/lustre

    Given that each OSS box has a 2GB (loop) mount, I would expect to see this reflected in the available size. There are no errors on the MDS/MDT box to indicate that multiple OSS/OST boxes have been lost.

    EDIT: each system has all other systems defined in /etc/hosts, and entries in iptables to provide access.

    So: I'm clearly making several mistakes. Any pointers as to where to start correcting them?
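
    Given that the client sees only two of the three OSTs, one place to start is verifying LNET connectivity to every server NID and the state of each OST on its OSS; a sketch (the hostnames are placeholders for your MDS/OSS nodes):

        # On the client: can we reach each server over LNET?
        lctl list_nids
        lctl ping lustre_MDS0@tcp
        lctl ping lustre_OSS0@tcp
        lctl ping lustre_OSS1@tcp

        # On each OSS: is the OST device up, and has recovery completed?
        lctl dl
        lctl get_param obdfilter.*.recovery_status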


  • How to free up space on RHEL6 /boot safely?

    - by ams
    I am trying to do yum update on a RHEL 6 box and I am getting this error message:

        Transaction Check Error:
          installing package kernel-2.6.32-279.9.1.el6.x86_64 needs 10MB on the /boot filesystem
          installing package grub-1:0.97-77.el6.x86_64 needs 10MB on the /boot filesystem

        Error Summary
        -------------
        Disk Requirements:
          At least 10MB more space needed on the /boot filesystem.

    My /boot has the following:

        # ls -lah /boot
        total 74M
        dr-xr-xr-x.  5 root root 2.0K Jun 10 08:05 .
        drwxr-xr-x. 23 root root 4.0K Aug 27 03:08 ..
        -rw-r--r--   1 root root  99K Apr 26 12:53 config-2.6.32-220.17.1.el6.x86_64
        -rw-r--r--   1 root root  99K Feb 10  2012 config-2.6.32-220.7.1.el6.x86_64
        -rw-r--r--.  1 root root  99K Nov  9  2011 config-2.6.32-220.el6.x86_64
        drwxr-xr-x.  3 root root 1.0K Mar 29  2012 efi
        drwxr-xr-x.  2 root root 1.0K Jun 10 07:53 grub
        -rw-r--r--   1 root root  15M Jun 10 07:53 initramfs-2.6.32-220.17.1.el6.x86_64.img
        -rw-r--r--   1 root root  15M Mar 29  2012 initramfs-2.6.32-220.7.1.el6.x86_64.img
        -rw-r--r--.  1 root root  15M Mar 29  2012 initramfs-2.6.32-220.el6.x86_64.img
        -rw-------   1 root root 3.4M Jun 10 08:06 initrd-2.6.32-220.17.1.el6.x86_64kdump.img
        -rw-------   1 root root 3.5M Jun 10 07:53 initrd-2.6.32-220.7.1.el6.x86_64kdump.img
        -rw-------   1 root root 3.4M Mar 29  2012 initrd-2.6.32-220.el6.x86_64kdump.img
        drwx------.  2 root root  12K Mar 29  2012 lost+found
        -rw-r--r--   1 root root 168K Apr 26 12:55 symvers-2.6.32-220.17.1.el6.x86_64.gz
        -rw-r--r--   1 root root 168K Feb 10  2012 symvers-2.6.32-220.7.1.el6.x86_64.gz
        -rw-r--r--.  1 root root 168K Nov  9  2011 symvers-2.6.32-220.el6.x86_64.gz
        -rw-r--r--   1 root root 2.3M Apr 26 12:53 System.map-2.6.32-220.17.1.el6.x86_64
        -rw-r--r--   1 root root 2.3M Feb 10  2012 System.map-2.6.32-220.7.1.el6.x86_64
        -rw-r--r--.  1 root root 2.3M Nov  9  2011 System.map-2.6.32-220.el6.x86_64
        -rwxr-xr-x   1 root root 3.8M Apr 26 12:53 vmlinuz-2.6.32-220.17.1.el6.x86_64
        -rw-r--r--   1 root root  171 Apr 26 12:53 .vmlinuz-2.6.32-220.17.1.el6.x86_64.hmac
        -rwxr-xr-x   1 root root 3.8M Feb 10  2012 vmlinuz-2.6.32-220.7.1.el6.x86_64
        -rw-r--r--   1 root root  170 Feb 10  2012 .vmlinuz-2.6.32-220.7.1.el6.x86_64.hmac
        -rwxr-xr-x.  1 root root 3.8M Nov  9  2011 vmlinuz-2.6.32-220.el6.x86_64
        -rw-r--r--.  1 root root  166 Nov  9  2011 .vmlinuz-2.6.32-220.el6.x86_64.hmac

    Here is the disk usage on /boot:

        # du -h
        13K   ./lost+found
        282K  ./grub
        247K  ./efi/EFI/redhat
        249K  ./efi/EFI
        251K  ./efi
        75M   .

    The problem is that when I got this server from my ISP, I used their default image for RHEL 6, which only allocates 100MB for /boot; clearly this is not enough. How can I get around this problem? Is it safe to delete any of the above files (some of them seem to be on the disk more than once)? Is there some way of expanding /boot without re-imaging the machine?
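
    Rather than deleting files from /boot by hand, removing the old kernel packages cleans up the matching vmlinuz/initramfs/System.map files and the GRUB entries in one step; a sketch using the package-cleanup tool from yum-utils:

        yum install yum-utils
        # Keep only the most recent installed kernel
        package-cleanup --oldkernels --count=1

        # Equivalent manual form, one package at a time
        yum remove kernel-2.6.32-220.7.1.el6.x86_64

    Setting installonly_limit=2 in /etc/yum.conf should also keep the kernel count down on future updates.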


  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server currently has at least two IP addresses: one for the public interface, another for the private. The servers that host SSL websites have more IP addresses. We also have virtual servers, configured similarly.

    Private network: The private range is currently just used for backups and monitoring. It's a gigabit port, and interface usage does not usually get very high. There are other technologies we're considering that would use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network), VPN to get access to the private range (something I'd rather avoid), dedicated database servers, LDAP, centralized configuration (like Puppet), and centralized logging. We don't have any private addresses in our DNS records (only public addresses). For our servers to use the correct IP address for the right interface (and not hard-code the IP address) would probably require setting up a private DNS server (so now we add two different DNS entries to two different systems).

    Public network: Our public range hosts a variety of services including web, email, and FTP. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, SSH, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs on a dedicated 20Mbps link. There are a couple of legacy servers with Fast Ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least two Gigabit Ethernet ports; the more traffic-heavy servers have 4-6 available (none is using more than the two Gigabit ports right now).

    IPv6: I want to get an IPv6 prefix from our ISP, so at least every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now; adding the public IPv6 addresses would make it three.

    Just use IPv6? I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, I'd utilize the newly freed interfaces to create a trunk. This has the advantage of coping if either the public or private traffic needs to exceed 1Gbps. The traffic on each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks, QoS can ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing an entry for every private address. We may still have private DNS (or just LDAP), but it'll be much more limited in scope, with fewer entries to duplicate.

    Summary: I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to me to be the best solution. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one link (trunked further if needed):

    - Are there any technical disadvantages (limitations, buffers, scalability)?
    - Are there any other security considerations (aside from the firewalls mentioned above)?
    - Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet?
    - Is there typical software for setting up a Linux network that doesn't have IPv6 support yet (logging, LDAP, Puppet)?
    - Is there some other thing I didn't consider?
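
    For reference, dual-stacking the public interface is only an extra address and route per box; a sketch with iproute2, where 203.0.113.0/24 and 2001:db8::/32 are documentation ranges standing in for the real allocations:

        # Existing public IPv4 service address stays in place
        ip addr add 203.0.113.10/28 dev eth0

        # Add an address and default route from the ISP-delegated IPv6 prefix
        ip -6 addr add 2001:db8:0:1::10/64 dev eth0
        ip -6 route add default via 2001:db8:0:1::1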


  • how to adjust the size of the root partition on live arch linux system (/dev/mapper/arch_root-image)

    - by leon
    Summary: I created a bootable USB drive with a live Bridge Linux (Arch-based) on it. Everything works fine. The live system mounts a device called /dev/mapper/arch_root-image as its ext4 root partition (the / mount point). The problem is that I don't know how to control the size of this partition. This is not a Bridge-specific issue (it also happens with ArchBang).

    Detail: My USB drive has a DOS partition table with two ext2 partitions:

        $ fdisk -l /dev/sdb
        Disk /dev/sdb: 29.8 GiB, 32006733824 bytes, 62513152 sectors
        Units: sectors of 1 × 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disklabel type: dos
        Disk identifier: 0x0007b7e2

        Device     Boot   Start      End    Blocks Id System
        /dev/sdb1  *       2048  2002943   1000448 83 Linux
        /dev/sdb2       2002944 32258047  15127552 83 Linux

    sdb1 is approximately 1GB and sdb2 is 14GB. The live system is on sdb1; sdb2 is empty.

    Now when I boot the live system, its filesystem looks like this:

        $ mount
        proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
        sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
        dev on /dev type devtmpfs (rw,nosuid,relatime,size=505272k,nr_inodes=126318,mode=755)
        run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
        /dev/sda1 on /run/archiso/bootmnt type ext2 (ro,relatime)
        cowspace on /run/archiso/cowspace type tmpfs (rw,relatime,size=772468k,mode=755)
        /dev/loop0 on /run/archiso/sfs/root-image type squashfs (ro,relatime)
        /dev/mapper/arch_root-image on / type ext4 (rw,relatime)
        securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
        tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
        devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
        tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
        cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
        pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
        cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
        cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
        cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
        cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
        cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
        cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
        cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
        mqueue on /dev/mqueue type mqueue (rw,relatime)
        debugfs on /sys/kernel/debug type debugfs (rw,relatime)
        hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
        configfs on /sys/kernel/config type configfs (rw,relatime)
        systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=36,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
        tmpfs on /tmp type tmpfs (rw)
        tmpfs on /etc/pacman.d/gnupg type tmpfs (rw,relatime,mode=755)

    As we can see, the root partition is from the device /dev/mapper/arch_root-image, and my problem is that the live system recognizes it as a 3.9GB drive:

        $ df -h
        Filesystem                   Size  Used Avail Use% Mounted on
        /dev/mapper/arch_root-image  3.9G  1.9G  2.1G  48% /
        dev                          494M     0  494M   0% /dev
        run                          503M   23M  481M   5% /run
        /dev/sda1                    962M  590M  324M  65% /run/archiso/bootmnt
        cowspace                     755M   32M  723M   5% /run/archiso/cowspace
        /dev/loop0                   520M  520M     0 100% /run/archiso/sfs/root-image
        tmpfs                        503M  132K  503M   1% /dev/shm
        tmpfs                        503M     0  503M   0% /sys/fs/cgroup
        tmpfs                        503M  360K  503M   1% /tmp
        tmpfs                        503M  896K  503M   1% /etc/pacman.d/gnupg

    My question is: how is this size controlled? I suspect it is related to the content of the aitab file, which is part of the Bridge ISO image:

        $ cat aitab
        # <img>       <mnt> <arch> <sfs_comp> <fs_type> <fs_size>
        root-image    /     i686   xz         ext4      50%

    I have read https://wiki.archlinux.org/index.php/archiso#aitab but found no clue.
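
    If fs_size is indeed what sizes the generated ext4 image, the place to change it would be the aitab in the Bridge/archiso build profile, followed by rebuilding the ISO; the arch_root-image device is created at build time, not at boot. A guess, assuming the fs_size column also accepts an absolute size:

        # aitab in the build profile -- rebuild the image after editing
        # <img>       <mnt> <arch> <sfs_comp> <fs_type> <fs_size>
        root-image    /     i686   xz         ext4      8G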


  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website or drill-through reports that mostly hit the SSAS OLAP cube.

    I manage the IT Operations team, and so far this has presented an interesting "scaling" issue for us. For our daily-refresh clients, the server is only "busy" for about 4-6 hours at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hours per week! We've done our best to distribute the load with some simple methods, spreading the daily clients evenly among the servers so that we're not trying to process daily clients back-to-back overnight. But long-term, this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant production-support overhead to basically "schedule" the ETL so that jobs don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3-5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person who doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: Are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient", in terms of parallel execution, to run in sets of three machines -- SSIS on one machine, SQL on another, and SSAS on a third -- instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration)? Obviously that would only make sense if we could process more than the three ETLs we were able to process on the machines independently.

    Another option we've considered is completely re-architecting our SSIS packages to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/memory/disk utilization, but that would be a herculean effort, and it seems like we'd be reinventing something you would think someone would sell (although I haven't had any luck finding it).

    So, in summary: are we missing an obvious solution for this, and does anyone know of any tools (free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue and node based" system, but that's not an official term.) Ultimately, VMware's Distributed Resource Scheduler addresses this: you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like someone out there would have solved this problem at the application layer instead of the hypervisor layer, by checking resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember: no suggestion is too crazy or radical! :-) Right now, VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff


  • USB connection is unstable with Nexus S 2.3.4 on AMD 64 running 64-bit Windows 7, but works with 32-bit Windows Vista

    - by Mike
    The USB connection is unstable with a Nexus S (Android 2.3.4) on an AMD 64 machine running 64-bit Windows 7, but it works with 32-bit Windows Vista.

    Problem description: On the 64-bit Windows 7 machine my Nexus S appears to connect, but then it disconnects moments later. Neither accessing USB storage nor loading an Android application package file (APK) using the Android Debug Bridge (ADB) works. On 32-bit Windows Vista, using the same USB cable, USB storage works. I haven't tried the ADB on 32-bit Windows Vista.

    Reproduction steps for USB storage (I have provided the reproduction steps for USB storage and not ADB because, if one isn't working, the other isn't working either, and the USB storage steps are shorter to document):

    1. Connect the USB cable to the Nexus S and my Windows 7 machine.
       Effect: The "USB Mass Storage, USB Connected" dialog appears with the button "Turn on USB storage."
    2. Click "Turn on USB storage."
       Effect: The "working circle" appears. A dialog briefly appears saying "USB storage in use," then it either returns me to step 1 (now that I am running 2.3.4) or is replaced with the Nexus S's application homepage (while I was running 2.3.3). I'm not sure if the version matters, but I mention it for completeness.

    On the 32-bit Windows Vista machine the connection is stable. I am able to navigate through the Nexus S file system; create, read, update, and delete files; etc. I haven't tried connecting with the ADB.

    Troubleshooting summary -- tried and failed:

    - Uninstalling and reinstalling the Android USB drivers, including removing the files.
    - Uninstalling my custom software.
    - Pulling the Nexus S's battery.
    - Restarting the Nexus S.
    - Restarting 64-bit Windows 7.
    - Changing USB ports on the 64-bit Windows 7 box.
    - Comparing the dates and file sizes of the DLLs in my google-usb_driver\amd64 directory and the windows\System32 directory. They match. The sizes for the google-usb_driver\i386 directory do not match (expected).
    - Turning off debugging mode on the Nexus S does not resolve the problem.
    - Searching Google.

    Tried and succeeded:

    - Connecting to another machine (Windows Vista) using the same USB cable and Nexus S phone.

    Troubleshooting observations: I notice that uninstalling the device drivers and deleting the files, then reinstalling the drivers, rebooting 64-bit Windows 7, unplugging the Nexus S, and plugging it back in occasionally helps for a short amount of time (minutes to hours, not days). When it is working, I can both access the Nexus S's drive and load/test applications using the ADB. I have also observed some wonky behavior in the Device Manager that I haven't tracked down: sometimes the black Nexus S image appears in the list of devices; sometimes the image displays as a computer with a green ISA card; and sometimes it appears neither at the top level of devices nor under "other devices," but it does appear under "disk drives" as "Android UMS Composite USB Device."

    System configuration: The Nexus S is running Android OS 2.3.4; "Settings\about phone\System updates" indicates that it is up to date as of May 21st, 2011. Both 32-bit Windows Vista and 64-bit Windows 7 are up to date. The Windows Vista system is running on an Intel 32-bit processor; Windows 7 is running on an AMD 64-bit processor. I have done Android development on both systems, but I usually develop on the 64-bit Windows 7 machine.
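
    When the link flaps like this, it can also help to watch it from the ADB side with a freshly restarted server; a short command-prompt sketch (assumes the SDK platform-tools directory is on PATH):

        REM Restart the ADB server, then see whether the device stays listed
        adb kill-server
        adb start-server
        adb devices

        REM If it appears, stream logs until the moment the connection drops
        adb logcat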


  • GooglePlayServicesNotAvailable: GooglePlayServices not available due to error 1

    - by Mathias Lin
    I'm on a Galaxy S III with Android 4.0.4, Google Play installed. In my app I try to get a token from the Google Play services, as described at https://developers.google.com/android/google-play-services/authentication. Since it's all quite new (the Google pages were last updated this week), there's not much documentation to be found, especially about each specific error code.

        final String token = GoogleAuthUtil.getToken(this, "[email protected]", "scope");

    gives me an exception:

        09-30 11:24:36.075: ERROR/GoogleAuthUtil(11984): GooglePlayServices not available due to error 1
        09-30 11:24:36.105: ERROR/AuthTokenCheck_(11984): Error 1
        com.google.android.gms.auth.GooglePlayServicesAvailabilityException: GooglePlayServicesNotAvailable
            at com.google.android.gms.auth.GoogleAuthUtil.f(Unknown Source)
            at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source)
            at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source)
            at mobi.app.activity.AuthTokenCheck.getAndUseAuthTokenBlocking(AuthTokenCheck.java:148)
            at mobi.app.activity.AuthTokenCheck$1.doInBackground(AuthTokenCheck.java:61)
            at android.os.AsyncTask$2.call(AsyncTask.java:264)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
            at java.util.concurrent.FutureTask.run(FutureTask.java:137)
            at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:208)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
            at java.lang.Thread.run(Thread.java:856)

    https://developers.google.com/android/google-play-services/reference/com/google/android/gms/auth/package-summary tells me: "GooglePlayServicesAvailabilityExceptions are special instances of UserRecoverableAuthExceptions which are thrown when the expected Google Play services app is not available for some reason." But what exactly does that mean? And how do I resolve it? I've added the Google Play services extras in my SDK and the jar to my project, marked as 'exported'.

    I'm also wondering what the "Google Play services app" exactly is. Unfortunately it's all not very clearly described at https://developers.google.com/android/google-play-services/, which says: "The Google Play services component is delivered as an APK through the Google Play Store, so updates to Google Play services are not dependent on carrier or OEM system image updates. Newer devices will also have Google Play services as part of the device's system image, but updates are still pushed to these newer devices through the Google Play Store." Isn't the "Google Play services" app the same as the "Google Play" app?

    Another question I have, due to the lack of documentation: what is the scope parameter for? The documentation just says the following, but doesn't define what an 'authentication scope' actually is: "scope -- String representing the authentication scope."


  • Dynamic DataGrid columns in WPF DataGrid based on the underlying set of data (and their type)

    - by StatsMan
    Hello everyone, I've got kind of a conceptual question. I am in the process of wrapping some statistics classes I wrote into WPF. For that I have two DataGrids (currently DataGridViews in WinForms). In one DataGrid, each row represents a column in the other. There I can set up different variables (as in mathematical/statistical variables) with fields like "Header", "DataType", "ValidationBehaviour", and "DisplayType". There I can also set up how each one should be displayed: some columns can automatically be set to ComboBox columns, some to TextBox columns, and so on and so forth.

    So, once I've set up these columns, I can go to the other grid and enter my data. I may, for instance, have generated (in grid 1) one column called "Annual Gross Salary" with input of numerical values, and another column called "Education" with "0=NoEducation", "1=College Level", "3=Universitary", etc. These labels are displayed as text in the combo box, and my statistics engine behind it then selects the respective value (0-3) for calculations (i.e. ordinal, nominal variables).

    In WinForms I could basically generate all the columns by hand in code and then add my data to the respective cells/rows. Now in WPF I thought that must be easy to replicate. However, yesterday I got started with ICustomTypeDescriptor, which (maybe I was too thick) didn't give me the results I was looking for. Basically, I just need to be able to dynamically generate columns (and rows) with different layouts and controls (ComboBox, simple input, DateTimes) based on the data that I have. But I don't really know how to go about it.

    So, in summary:

    DataGrid 1:
    - Its purpose is to display the columns that have been specified in DataGrid 2.
    - In the rows below those columns, the user can add any kind of data that the column specifications allow.

    DataGrid 2:
    - Each row in this grid represents a column in DataGrid 1.
    - It contains fields like Name/Header, DataType, Validation Behaviour, Default Value, Data Formatting, etc.
    - It also contains a function to set up how a column should be displayed: the user can select, for instance, ComboBoxColumn (and also add the available options), DateTime, normal TextBox, CheckBox, etc.
    - After finishing adding a row, it automatically appears as a new column in DataGrid 1.

    I'd appreciate any kind of pointer in the right direction. Thanks very, very much in advance! :)


  • Troubleshooting .NET "Fatal Execution Engine Error"

    - by JYelton
    Summary: I periodically get a .NET "Fatal Execution Engine Error" in an application which I cannot seem to debug. The dialog that comes up only offers to close the program or send information about the error to Microsoft. I've tried looking at the more detailed information, but I don't know how to make use of it.

    Error: The error is visible in Event Viewer under Applications and is as follows:

        .NET Runtime version 2.0.50727.3607 - Fatal Execution Engine Error (7A09795E) (80131506)

    The computer running it is Windows XP Professional SP3 (Intel Core 2 Quad Q6600 2.4GHz with 2.0 GB of RAM). Other .NET-based projects that lack multi-threaded downloading (see below) seem to run just fine.

    Application: The application is written in C#/.NET 3.5 using VS2008 and installed via a setup project. The app is multi-threaded and downloads data from multiple web servers using System.Net.HttpWebRequest and its methods. I've determined that the .NET error has something to do with either threading or HttpWebRequest, but I haven't been able to get any closer, as this particular error seems impossible to debug. I've tried handling errors on many levels, including the following in Program.cs:

        // handle UI thread exceptions
        Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException);
        // handle non-UI thread exceptions
        AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        // force all windows forms errors to go through our handler
        Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);

    More notes and what I've tried:

    - Installed Visual Studio 2008 on the target machine and tried running in debug mode, but the error still occurs, with no hint as to where in the source code it occurred.
    - When running the program from its installed version (release), the error occurs more frequently, usually within minutes of launching the application. When running the program in debug mode inside VS2008, it can run for hours or days before generating the error.
    - Reinstalled .NET 3.5 and made sure all updates are applied.
    - Broke random cubicle objects in frustration.
    - Rewrote parts of the code that deal with threading and downloading in attempts to catch and log exceptions, though logging seemed to aggravate the problem (and never provided any data).

    Question: What steps can I take to troubleshoot or debug this kind of error? Memory dumps and the like seem to be the next step, but I'm not experienced at interpreting them. Perhaps there's something more I can do in the code to try to catch errors. It would be nice if the "Fatal Execution Engine Error" were more informative, but internet searches have only told me that it's a common error for a lot of .NET-related items.
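
    Since the release build crashes within minutes, one practical next step is to capture a crash dump with the Debugging Tools for Windows and open it with the SOS extension; a sketch, where MyApp.exe and C:\dumps are placeholders:

        REM Wait for the crash and write a dump when it happens
        adplus -crash -pn MyApp.exe -o C:\dumps

        REM Then, in WinDbg with the dump loaded (.NET 2.0-era runtime):
        .loadby sos mscorwks
        !analyze -v
        !threads
        !clrstack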


  • WPF storyboard animation issue when using VisualBrush

    - by Flack
    Hey guys, I was playing around with storyboards, a flipping animation, and visual brushes. I have encountered an issue though. Below is the XAML and code-behind of a small sample I quickly put together to try to demonstrate the problem. When you first start the app, you are presented with a red square and two buttons. If you click the "Flip" button, the red square will "flip" over and a blue one will appear. In reality, all that is happening is that the scale of the width of the StackPanel that the red square is in is being decreased until it reaches zero, and then the StackPanel where a blue square is, whose width is initially scaled to zero, has its width increased. If you click the "Flip" button a few times, the animation looks OK and smooth. Now, if you hit the "Reflection" button, a reflection of the red/blue buttons is added to their respective StackPanels. Hitting the "Flip" button now will still cause the flip animation, but it is no longer smooth. The StackPanels' width often does not shrink to zero; the width shrinks somewhat but then just stops before being completely invisible. Then the other StackPanel appears as usual. The only thing that changed was adding the reflection, which is just a VisualBrush. Below is the code. Does anyone have any idea why the animations are different between the two cases (stalling in the second case)? Thanks.

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xml:lang="en-US"
                xmlns:d="http://schemas.microsoft.com/expression/blend/2006"
                xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                mc:Ignorable="d"
                x:Class="WpfFlipTest.Window1"
                x:Name="Window" Title="Window1" Width="214" Height="224">
          <Window.Resources>
            <Storyboard x:Key="sbFlip">
              <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="redStack"
                  Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)">
                <SplineDoubleKeyFrame KeyTime="00:00:00.4" Value="0"/>
              </DoubleAnimationUsingKeyFrames>
              <DoubleAnimationUsingKeyFrames BeginTime="00:00:00.4" Storyboard.TargetName="blueStack"
                  Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)">
                <SplineDoubleKeyFrame KeyTime="00:00:00.8" Value="1"/>
              </DoubleAnimationUsingKeyFrames>
            </Storyboard>
            <Storyboard x:Key="sbFlipBack">
              <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="blueStack"
                  Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)">
                <SplineDoubleKeyFrame KeyTime="00:00:00.4" Value="0"/>
              </DoubleAnimationUsingKeyFrames>
              <DoubleAnimationUsingKeyFrames BeginTime="00:00:00.4" Storyboard.TargetName="redStack"
                  Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)">
                <SplineDoubleKeyFrame KeyTime="00:00:00.8" Value="1"/>
              </DoubleAnimationUsingKeyFrames>
            </Storyboard>
          </Window.Resources>
          <Grid x:Name="LayoutRoot" Background="Gray">
            <Grid.RowDefinitions>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="Auto"/>
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
              <ColumnDefinition Width="Auto"/>
              <ColumnDefinition Width="Auto"/>
            </Grid.ColumnDefinitions>
            <StackPanel Name="redStack" Grid.Row="0" Grid.Column="0" RenderTransformOrigin="0.5,0.5">
              <StackPanel.RenderTransform>
                <ScaleTransform/>
              </StackPanel.RenderTransform>
              <Border Name="redBorder" BorderBrush="Transparent" BorderThickness="4" Width="Auto" Height="Auto">
                <Button Margin="0" Name="redButton" Height="75" Background="Red" Width="105" />
              </Border>
              <Border Width="{Binding ElementName=redBorder, Path=ActualWidth}"
                      Height="{Binding ElementName=redBorder, Path=ActualHeight}"
                      Opacity="0.2" BorderBrush="Transparent" BorderThickness="4"
                      Name="redRefelction" Visibility="Collapsed">
                <Border.OpacityMask>
                  <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                    <LinearGradientBrush.GradientStops>
                      <GradientStop Offset="0" Color="Black"/>
                      <GradientStop Offset=".6" Color="Transparent"/>
                    </LinearGradientBrush.GradientStops>
                  </LinearGradientBrush>
                </Border.OpacityMask>
                <Border.Background>
                  <VisualBrush Visual="{Binding ElementName=redButton}">
                    <VisualBrush.Transform>
                      <ScaleTransform ScaleX="1" ScaleY="-1" CenterX="52.5" CenterY="37.5" />
                    </VisualBrush.Transform>
                  </VisualBrush>
                </Border.Background>
              </Border>
            </StackPanel>
            <StackPanel Name="blueStack" Grid.Row="0" Grid.Column="0" RenderTransformOrigin="0.5,0.5">
              <StackPanel.RenderTransform>
                <ScaleTransform ScaleX="0"/>
              </StackPanel.RenderTransform>
              <Border Name="blueBorder" BorderBrush="Transparent" BorderThickness="4" Width="Auto" Height="Auto">
                <Button Grid.Row="0" Grid.Column="1" Margin="0" Width="105" Background="Blue" Name="blueButton" Height="75"/>
              </Border>
              <Border Width="{Binding ElementName=blueBorder, Path=ActualWidth}"
                      Height="{Binding ElementName=blueBorder, Path=ActualHeight}"
                      Opacity="0.2" BorderBrush="Transparent" BorderThickness="4"
                      Name="blueRefelction" Visibility="Collapsed">
                <Border.OpacityMask>
                  <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                    <LinearGradientBrush.GradientStops>
                      <GradientStop Offset="0" Color="Black"/>
                      <GradientStop Offset=".6" Color="Transparent"/>
                    </LinearGradientBrush.GradientStops>
                  </LinearGradientBrush>
                </Border.OpacityMask>
                <Border.Background>
                  <VisualBrush Visual="{Binding ElementName=blueButton}">
                    <VisualBrush.Transform>
                      <ScaleTransform ScaleX="1" ScaleY="-1" CenterX="52.5" CenterY="37.5" />
                    </VisualBrush.Transform>
                  </VisualBrush>
                </Border.Background>
              </Border>
            </StackPanel>
            <Button Grid.Row="1" Click="FlipButton_Click" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76">Flip</Button>
            <Button Grid.Row="0" Grid.Column="1" Click="ReflectionButton_Click" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76">Reflection</Button>
          </Grid>
        </Window>

    Here are the button click handlers:

        using System.Windows;
        using System.Windows.Media.Animation;

        namespace WpfFlipTest
        {
            public partial class Window1 : Window
            {
                public Window1()
                {
                    InitializeComponent();
                }

                bool flipped = false;

                private void FlipButton_Click(object sender, RoutedEventArgs e)
                {
                    Storyboard sbFlip = (Storyboard)Resources["sbFlip"];
                    Storyboard sbFlipBack = (Storyboard)Resources["sbFlipBack"];
                    if (flipped)
                    {
                        sbFlipBack.Begin();
                        flipped = false;
                    }
                    else
                    {
                        sbFlip.Begin();
                        flipped = true;
                    }
                }

                bool reflection = false;

                private void ReflectionButton_Click(object sender, RoutedEventArgs e)
                {
                    if (reflection)
                    {
                        reflection = false;
                        redRefelction.Visibility = Visibility.Collapsed;
                        blueRefelction.Visibility = Visibility.Collapsed;
                    }
                    else
                    {
                        reflection = true;
                        redRefelction.Visibility = Visibility.Visible;
                        blueRefelction.Visibility = Visibility.Visible;
                    }
                }
            }
        }

    UPDATE: I have been testing this some more to track down what is causing the issue, and I believe I have found the cause. Below I have pasted new XAML and code-behind. The new sample is very similar to the original, with a few minor modifications. The XAML basically consists of two stack panels, each containing two borders. The second border in each stack panel has a VisualBrush background (a reflection of the border above it). Now, when I click the "Flip" button, one stack panel gets its ScaleX reduced to zero, while the second stack panel, whose initial ScaleX is zero, gets its ScaleX increased to 1. This animation gives the illusion of flipping. There are also two TextBlocks which display the scale factor of each stack panel; I added those to try to diagnose my issue. The issue is (as described in the original post) that the flipping animation is not smooth. Every time I hit the flip button, the animation starts, but whenever the ScaleX factor gets to around .14 to .16 the animation looks like it stalls, and the stack panels never have their ScaleX reduced to zero, so they never totally disappear. Now, the strange thing is that if I change the Width/Height properties of the "frontBorder" and "backBorder" borders defined below to use explicit values instead of Auto, such as Width=105 and Height=75 (to match the button in the border), everything works fine. The animation stutters the first two or three times I run it, but after that the flips are smooth and flawless. (By the way, when an animation is run for the first time, is there something going on in the background, some sort of initialization, that causes it to be a little slow the first time?) Is it possible that the Auto Width/Height of the borders is causing the issue? I can reproduce it every time, but I am not sure why Auto Width/Height would be a problem. Below is the sample. Thanks for the help.
        <Window x:Class="FlipTest.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
          <Window.Resources>
            <Storyboard x:Key="sbFlip">
              <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="front"
                  Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)">
                <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="0"/>
              </DoubleAnimationUsingKeyFrames>
              <DoubleAnimationUsingKeyFrames BeginTime="00:00:00.5" Storyboard.TargetName="back"
                  Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)">
                <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="1"/>
              </DoubleAnimationUsingKeyFrames>
            </Storyboard>
            <Storyboard x:Key="sbFlipBack">
              <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="back"
                  Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)">
                <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="0"/>
              </DoubleAnimationUsingKeyFrames>
              <DoubleAnimationUsingKeyFrames BeginTime="00:00:00.5" Storyboard.TargetName="front"
                  Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)">
                <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="1"/>
              </DoubleAnimationUsingKeyFrames>
            </Storyboard>
          </Window.Resources>
          <Grid x:Name="LayoutRoot" Background="White" ShowGridLines="True">
            <Grid.RowDefinitions>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="Auto"/>
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
              <ColumnDefinition Width="Auto"/>
              <ColumnDefinition Width="Auto"/>
            </Grid.ColumnDefinitions>
            <StackPanel x:Name="front" RenderTransformOrigin="0.5,0.5">
              <StackPanel.RenderTransform>
                <ScaleTransform/>
              </StackPanel.RenderTransform>
              <Border Name="frontBorder" BorderBrush="Yellow" BorderThickness="2" Width="Auto" Height="Auto">
                <Button Margin="0" Name="redButton" Height="75" Background="Red" Width="105" Click="FlipButton_Click"/>
              </Border>
              <Border Width="{Binding ElementName=frontBorder, Path=ActualWidth}"
                      Height="{Binding ElementName=frontBorder, Path=ActualHeight}"
                      Opacity="0.2" BorderBrush="Transparent">
                <Border.OpacityMask>
                  <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                    <LinearGradientBrush.GradientStops>
                      <GradientStop Offset="0" Color="Black"/>
                      <GradientStop Offset=".6" Color="Transparent"/>
                    </LinearGradientBrush.GradientStops>
                  </LinearGradientBrush>
                </Border.OpacityMask>
                <Border.Background>
                  <VisualBrush Visual="{Binding ElementName=frontBorder}">
                    <VisualBrush.Transform>
                      <ScaleTransform ScaleX="1" ScaleY="-1" CenterX="52.5" CenterY="37.5" />
                    </VisualBrush.Transform>
                  </VisualBrush>
                </Border.Background>
              </Border>
            </StackPanel>
            <StackPanel x:Name="back" RenderTransformOrigin="0.5,0.5">
              <StackPanel.RenderTransform>
                <ScaleTransform ScaleX="0"/>
              </StackPanel.RenderTransform>
              <Border Name="backBorder" BorderBrush="Yellow" BorderThickness="2" Width="Auto" Height="Auto">
                <Button Margin="0" Width="105" Background="Blue" Name="blueButton" Height="75" Click="FlipButton_Click"/>
              </Border>
              <Border Width="{Binding ElementName=backBorder, Path=ActualWidth}"
                      Height="{Binding ElementName=backBorder, Path=ActualHeight}"
                      Opacity="0.2" BorderBrush="Transparent">
                <Border.OpacityMask>
                  <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                    <LinearGradientBrush.GradientStops>
                      <GradientStop Offset="0" Color="Black"/>
                      <GradientStop Offset=".6" Color="Transparent"/>
                    </LinearGradientBrush.GradientStops>
                  </LinearGradientBrush>
                </Border.OpacityMask>
                <Border.Background>
                  <VisualBrush Visual="{Binding ElementName=backBorder}">
                    <VisualBrush.Transform>
                      <ScaleTransform ScaleX="1" ScaleY="-1" CenterX="52.5" CenterY="37.5" />
                    </VisualBrush.Transform>
                  </VisualBrush>
                </Border.Background>
              </Border>
            </StackPanel>
            <Button Grid.Row="1" Click="FlipButton_Click" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76">Flip</Button>
            <TextBlock Grid.Row="2" Grid.Column="0" Foreground="DarkRed" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76"
                       Text="{Binding ElementName=front, Path=(UIElement.RenderTransform).(ScaleTransform.ScaleX)}"/>
            <TextBlock Grid.Row="3" Grid.Column="0" Foreground="DarkBlue" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76"
                       Text="{Binding ElementName=back, Path=(UIElement.RenderTransform).(ScaleTransform.ScaleX)}"/>
          </Grid>
        </Window>

    Code-behind:

        using System.Windows;
        using System.Windows.Media.Animation;

        namespace FlipTest
        {
            /// <summary>
            /// Interaction logic for Window1.xaml
            /// </summary>
            public partial class Window1 : Window
            {
                public Window1()
                {
                    InitializeComponent();
                }

                bool flipped = false;

                private void FlipButton_Click(object sender, RoutedEventArgs e)
                {
                    Storyboard sbFlip = (Storyboard)Resources["sbFlip"];
                    Storyboard sbFlipBack = (Storyboard)Resources["sbFlipBack"];
                    if (flipped)
                    {
                        sbFlipBack.Begin();
                        flipped = false;
                    }
                    else
                    {
                        sbFlip.Begin();
                        flipped = true;
                    }
                }
            }
        }
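
    In case it is useful for isolating whether the stall is tied to re-triggering the key-frame storyboards, here is a minimal code-behind sketch (my own, untested against this sample; it is not the poster's code). It drives the same ScaleTransform.ScaleX values with To-only animations and SnapshotAndReplace handoff, so each click continues from whatever the current animated value is. The element names assume the first sample above.

        // Hypothetical alternative to the sbFlip/sbFlipBack storyboards.
        // To-only DoubleAnimations start from the property's current value,
        // so a re-triggered flip never snaps back to a stale key frame.
        private void Flip(FrameworkElement hide, FrameworkElement show)
        {
            var hideAnim = new DoubleAnimation(0.0, TimeSpan.FromSeconds(0.4));
            var showAnim = new DoubleAnimation(1.0, TimeSpan.FromSeconds(0.4));
            showAnim.BeginTime = TimeSpan.FromSeconds(0.4); // start once the hide half is done

            hide.RenderTransform.BeginAnimation(ScaleTransform.ScaleXProperty, hideAnim,
                                                HandoffBehavior.SnapshotAndReplace);
            show.RenderTransform.BeginAnimation(ScaleTransform.ScaleXProperty, showAnim,
                                                HandoffBehavior.SnapshotAndReplace);
        }

    A click handler would then call Flip(redStack, blueStack) or Flip(blueStack, redStack) depending on the flipped flag.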

    Read the article

  • Sunrise / set calculations

    - by dassouki
    I'm trying to calculate the sunset / rise times using Python, based on the link provided below. My results from Excel and Python do not match the real values. Any ideas on what I could be doing wrong? My Excel sheet can be found at http://transpotools.com/sun_time.xls

        # Created on 2010-03-28
        # @author: dassouki
        # @source: http://williams.best.vwh.net/sunrise_sunset_algorithm.htm
        # @summary: this is based on the Nautical Almanac Office, United States
        #           Naval Observatory

        import math, sys

        class TimeOfDay(object):

            def calculate_time(self, in_day, in_month, in_year, lat, long,
                               is_rise, utc_time_zone):
                # is_rise is a bool: when it's True it indicates rise,
                # and when it's False it indicates setting time

                # set zenith
                zenith = 96
                # official = 90 degrees 50'
                # civil = 96 degrees
                # nautical = 102 degrees
                # astronomical = 108 degrees

                # 1- calculate the day of year
                n1 = math.floor( 275 * in_month / 9 )
                n2 = math.floor( ( in_month + 9 ) / 12 )
                n3 = ( 1 + math.floor( in_year - 4 * math.floor( in_year / 4 ) + 2 ) / 3 )
                new_day = n1 - ( n2 * n3 ) + in_day - 30
                print "new_day ", new_day

                # 2- calculate rising / setting time
                if is_rise:
                    rise_or_set_time = new_day + ( ( 6 - ( long / 15 ) ) / 24 )
                else:
                    rise_or_set_time = new_day + ( ( 18 - ( long / 15 ) ) / 24 )
                print "rise / set", rise_or_set_time

                # 3- calculate sun mean anomaly
                sun_mean_anomaly = ( 0.9856 * rise_or_set_time ) - 3.289
                print "sun mean anomaly", sun_mean_anomaly

                # 4- calculate true longitude
                true_long = ( sun_mean_anomaly
                              + ( 1.916 * math.sin( math.radians( sun_mean_anomaly ) ) )
                              + ( 0.020 * math.sin( 2 * math.radians( sun_mean_anomaly ) ) )
                              + 282.634 )
                print "true long ", true_long

                # make sure true_long is within 0, 360
                if true_long < 0:
                    true_long = true_long + 360
                elif true_long > 360:
                    true_long = true_long - 360
                print "true long (360 if) ", true_long

                # 5- calculate s_r_a (sun right ascension)
                s_r_a = math.degrees( math.atan( 0.91764 * math.tan( math.radians( true_long ) ) ) )
                print "s_r_a is ", s_r_a

                # make sure it's between 0 and 360
                if s_r_a < 0:
                    s_r_a = s_r_a + 360
                elif true_long > 360:
                    s_r_a = s_r_a - 360
                print "s_r_a (modified) is ", s_r_a

                # s_r_a has to be in the same quadrant as true_long
                true_long_quad = ( math.floor( true_long / 90 ) ) * 90
                s_r_a_quad = ( math.floor( s_r_a / 90 ) ) * 90
                s_r_a = s_r_a + ( true_long_quad - s_r_a_quad )
                print "s_r_a (quadrant) is ", s_r_a

                # convert s_r_a to hours
                s_r_a = s_r_a / 15
                print "s_r_a (to hours) is ", s_r_a

                # 6- calculate sun declination in terms of cos and sin
                sin_declanation = 0.39782 * math.sin( math.radians( true_long ) )
                cos_declanation = math.cos( math.asin( sin_declanation ) )
                print " sin/cos declanations ", sin_declanation, ", ", cos_declanation

                # sun local hour
                cos_hour = ( math.cos( math.radians( zenith ) )
                             - ( sin_declanation * math.sin( math.radians( lat ) ) )
                             / ( cos_declanation * math.cos( math.radians( lat ) ) ) )
                print "cos_hour ", cos_hour

                # extreme north / south
                if cos_hour > 1:
                    print "Sun Never Rises at this location on this date, exiting"
                    # sys.exit()
                elif cos_hour < -1:
                    print "Sun Never Sets at this location on this date, exiting"
                    # sys.exit()
                print "cos_hour (2)", cos_hour

                # 7- sunrise/sunset local time calculations
                if is_rise:
                    sun_local_hour = ( 360 - math.degrees( math.acos( cos_hour ) ) ) / 15
                else:
                    sun_local_hour = math.degrees( math.acos( cos_hour ) ) / 15
                print "sun local hour ", sun_local_hour

                sun_event_time = sun_local_hour + s_r_a - ( 0.06571 * rise_or_set_time ) - 6.622
                print "sun event time ", sun_event_time

                # final result
                time_in_utc = sun_event_time - ( long / 15 ) + utc_time_zone
                return time_in_utc

        # test through main
        def main():
            print "Time of day App "
            # test: fredericton, NB
            # answer: 7:34 am
            long = 66.6
            lat = -45.9
            utc_time = -4
            d = 3
            m = 3
            y = 2010
            is_rise = True
            tod = TimeOfDay()
            print "TOD is ", tod.calculate_time(d, m, y, lat, long, is_rise, utc_time)

        if __name__ == "__main__":
            main()
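
    A note for anyone comparing this against the source algorithm (an observation on the code above, not a tested fix): the Almanac formulation computes the local hour angle as

        cosH = ( cos(zenith) - sin(decl) * sin(lat) ) / ( cos(decl) * cos(lat) )

    with the whole difference divided by cos(decl) * cos(lat), whereas the parenthesization of cos_hour in step 6 above divides only the sin(decl) * sin(lat) term before subtracting. The test values in main() may also be worth checking: under the usual north-positive / west-negative convention, Fredericton, NB would be lat = 45.9 and long = -66.6 rather than the values above, and zenith = 96 is civil twilight rather than the official sunrise zenith of 90 degrees 50' listed in the code's own comments.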

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in SQL Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies, perhaps it will help to know that I'm skilled in MS SQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in the outer reports and the same data, but in detail, in the inner reports. In order to spare the DB server multiple executions, I thought to populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file, and you can have a Dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQL Developer, and I can query the data from the temp tables. However, this didn't appear to work out, because SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS, my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from SQL Developer works fine, for example:

        DECLARE
          ActivityCode varchar2(15) := '1208-0916 ';
          ExecutionID varchar2(32) := SYS_GUID();
        BEGIN
          CIPProjectBudget (ActivityCode, ExecutionID);
        END;

    Never mind that in this example I don't know the GUID; this simply proves it works, because rows are inserted into my four tables. But in the SSRS report I'm still getting no rows in my Datasets, and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of: Oracle uses implicit transactions and my changes aren't getting committed? Even though I can prove that the non-rowset-returning SP is executing (because if I leave out the parameter mapping, it complains at report rendering time about not having enough parameters), perhaps it's not really executing. Somehow. Wrong execution order isn't the problem, or rows would appear in the tables, and they aren't. I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working, and I am stuck. If you want more details: in my SSRS report I have a List object (it's a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually there will be four reports in total: one main report, with three nested subreports. Each subreport will be in a List on the parent report.

    Read the article

  • ASP.NET MVC 2 jQuery POST not displaying the model state errors

    - by Oshan
    Hello, I have been using ASP.NET MVC for a bit (but I'm still a beginner). I want to have the ability to update two views as the result of a jQuery postback. Basically I have a list view and a details view. The details view is presented using a jQuery popup (using jQuery UI). I only want to update the list if the details save is successful (i.e. there are no validation errors on the details view). However, if there are any validation errors in the details view, I want to update the details view so that the user sees the validation errors. So I thought that, in my controller, I would return a JsonResult instead of a View:

        [HttpPost]
        public ActionResult SavePersonInfo(Person p)
        {
            if (ModelState.IsValid)
            {
                return View("PersonList");
            }
            return Json(new { Error = true, View = PartialView("PersonDetails", p) });
        }

    As you can see, if there are no errors I return the person list view, but if there are any validation errors, I return the details view. The reason I'm returning a JsonResult is that I need to tell my view there is an error, so that the view (jQuery) knows which section to update (as in whether to update the person list 'div' or the popup dialog 'div'). So, in my view, the jQuery is as follows (please assume that there is a form for entering in the person details and that the "SubmitPersonForm();" function is called upon clicking the "Save" button):

        <script type="text/javascript">
            $('#btnSave').click(function (event) {
                onBegin();
                $.ajax({
                    type: "POST",
                    url: "/Person/Save",
                    data: $('form').serialize(),
                    success: function (result) {
                        if (result.Error) {
                            $('#dvDetails').html(result.View);
                        } else {
                            $('#dvPersonList').html(result);
                        }
                    }
                });
            });
        </script>

    The problem I have now is that when there is a validation error, I do see the correct 'div' being updated, but I lose the ASP.NET MVC validation messages. I do not see any validation errors in red, as if ASP.NET MVC completely ignored them. However, my ModelState does have those errors; they are just not displayed in the details view. I do have a validation summary and Html.ValidationMessageFor(m => ...) statements in my details view. Could someone tell me why I'm not seeing the validation errors? Although I'm using a JsonResult, I do use the right property, which is a valid view, when I render 'dvDetails'. Am I doing something that I'm not supposed to in ASP.NET MVC? By the way, I'm using ASP.NET MVC 2 RC with Visual Studio 2010 RC. Thank you.
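
    One pattern that may help here, sketched under assumptions rather than tested against the code above: serializing the PartialViewResult object into JSON does not produce rendered HTML, and MVC 2 has no built-in controller helper for rendering a partial to a string. A small hand-rolled helper (the method name is mine) can render the partial inside the controller, with the current ModelState still in scope, so the validation messages end up in the JSON payload:

        // Hypothetical helper for a Controller class; renders a partial view,
        // including any ModelState errors, into a string for a JSON response.
        private string RenderPartialToString(string viewName, object model)
        {
            ViewData.Model = model;
            using (var sw = new System.IO.StringWriter())
            {
                var viewResult = ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
                var viewContext = new ViewContext(ControllerContext, viewResult.View, ViewData, TempData, sw);
                viewResult.View.Render(viewContext, sw);
                viewResult.ViewEngine.ReleaseView(ControllerContext, viewResult.View);
                return sw.ToString();
            }
        }

    The action would then return something like: return Json(new { Error = true, View = RenderPartialToString("PersonDetails", p) }); so that result.View on the client is actual markup.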

    Read the article

  • One-to-one Mapping issue with NHibernate/Fluent: Foreign Key not updating

    - by Trevor
    Summary: Parent and Child class. One-to-one relationship between the Parent and Child. Parent has a FK property which references the primary key of the Child. Code as follows:

        public class NHTestParent
        {
            public virtual Guid NHTestParentId { get; set; }

            public virtual Guid ChildId
            {
                get { return ChildRef.NHTestChildId; }
                set { }
            }

            public virtual string ParentName { get; set; }

            protected NHTestChild _childRef;
            public virtual NHTestChild ChildRef
            {
                get
                {
                    if (_childRef == null)
                        _childRef = new NHTestChild();
                    return _childRef;
                }
                set { _childRef = value; }
            }
        }

        public class NHTestChild
        {
            public virtual Guid NHTestChildId { get; set; }
            public virtual string ChildName { get; set; }
        }

    With the following Fluent mappings. Parent mapping:

        Id(x => x.NHTestParentId);
        Map(x => x.ParentName);
        Map(x => x.ChildId);
        References(x => x.ChildRef, "ChildId").Cascade.All();

    Child mapping:

        Id(x => x.NHTestChildId);
        Map(x => x.ChildName);

    If I do something like this (pseudo-code):

        NHTestParent parent = new NHTestParent();
        parent.ParentName = "Parent 1";
        parent.ChildRef.ChildName = "Child 1";
        nhibernateSession.SaveOrUpdate(parent);
        // commit

    ... I get an error: "Invalid index 3 for this SqlParameterCollection with Count=3". If I change the parent 'References' line as follows (i.e. provide the name of the child property I'm pointing at):

        References(x => x.ChildRef, "ChildId").PropertyRef("NHTestChildId").Cascade.All();

    I get the error: "Unable to resolve property: NHTestChildId". So I tried the 'HasOne()' reference setting, as follows:

        HasOne<NHTestChild>(x => x.ChildRef).ForeignKey("ChildId").Cascade.All().Fetch.Join();

    This results in the save working, but the load fails to find the child. If I inspect the SQL NHibernate produces, I can see that NHibernate is assuming the primary key of the parent is the link to the child (i.e. the load join condition is "parent.NHTestParentId = child.NHTestChildId"). The 'ForeignKey' specified appears to be ignored. I can set any value and no error occurs; the join just always fails and no child is returned. I've tried a number of slight variations on the above. It seems like it should be a simple thing to achieve. Any ideas?
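
    For what it's worth, "Invalid index N for this SqlParameterCollection with Count=N" is commonly caused by the same column being mapped twice, which is what Map(x => x.ChildId) plus References(..., "ChildId") does here. A minimal sketch of the parent mapping with the duplicate removed, assuming ClassMap-style Fluent NHibernate mappings like those implied above (untested against these exact classes):

        // Hedged sketch, not the poster's code: the ChildId column is mapped
        // only through the References call; the ChildId convenience property,
        // which just delegates to ChildRef, is deliberately left unmapped.
        using FluentNHibernate.Mapping;

        public class NHTestParentMap : ClassMap<NHTestParent>
        {
            public NHTestParentMap()
            {
                Id(x => x.NHTestParentId);
                Map(x => x.ParentName);
                References(x => x.ChildRef, "ChildId").Cascade.All();
            }
        }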

    Read the article

  • Timeout Expired error Using LINQ

    - by Refracted Paladin
    I am going to sum up my problem first and then offer massive details and what I have already tried.

    Summary: I have an internal WinForms app that uses LINQ to SQL to connect to a local SQL Express database. Each user has their own DB, and the DBs stay in sync through merge replication with a central DB. All DBs are SQL 2005 (SP2 or SP3). We have been using this app for over 5 months now, but recently our users are getting a "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."

    Detailed: The strange part is that they get it in two different locations (two different LINQ methods) and only the first time one fires in a given time period (~5 minutes). One LINQ method is pulling all records that match a FK ID and then manipulating them to form a hierarchy view for a TreeView. The second is pulling all records that match a FK ID and dumping them into a DataGridView. The only things I can find in common between the two are that the first IS an IEnumerable and the second converts itself from IQueryable to IEnumerable to DataTable... I looked at the queries in Profiler and they 'seemed' normal. They are not very complicated queries. They are only pulling back 10-90 records, from one table. Any thoughts, suggestions, hints, whatever would be greatly appreciated. I am at my wit's end on this....

        public IList<CaseNoteTreeItem> GetTreeViewDataAsList(int personID)
        {
            var myContext = MatrixDataContext.Create();
            var caseNotesTree = from cn in myContext.tblCaseNotes
                                where cn.PersonID == personID
                                orderby cn.ContactDate descending, cn.InsertDate descending
                                select new CaseNoteTreeItem
                                {
                                    CaseNoteID = cn.CaseNoteID,
                                    NoteContactDate = Convert.ToDateTime(cn.ContactDate).ToShortDateString(),
                                    ParentNoteID = cn.ParentNote,
                                    InsertUser = cn.InsertUser,
                                    ContactDetailsPreview = cn.ContactDetails.Substring(0, 75)
                                };
            return caseNotesTree.ToList<CaseNoteTreeItem>();
        }

    AND THIS ONE

        public static DataTable GetAllCNotes(int personID)
        {
            using (var context = MatrixDataContext.Create())
            {
                var caseNotes = from cn in context.tblCaseNotes
                                where cn.PersonID == personID
                                orderby cn.ContactDate
                                select new
                                {
                                    cn.ContactDate,
                                    cn.ContactDetails,
                                    cn.TimeSpentUnits,
                                    cn.IsCaseLog,
                                    cn.IsPreEnrollment,
                                    cn.PresentAtContact,
                                    cn.InsertDate,
                                    cn.InsertUser,
                                    cn.CaseNoteID,
                                    cn.ParentNote
                                };
                return caseNotes.ToList().CopyLinqToDataTable();
            }
        }
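
    If the first call in each ~5-minute window is simply exceeding ADO.NET's default 30-second command timeout (plausible if merge replication or a cold cache makes that first query slow), LINQ to SQL exposes a CommandTimeout property on System.Data.Linq.DataContext that can be raised. A small sketch, reusing the poster's MatrixDataContext factory:

        using (var context = MatrixDataContext.Create())
        {
            // CommandTimeout is in seconds; the default is 30.
            context.CommandTimeout = 120;
            var caseNotes = (from cn in context.tblCaseNotes
                             where cn.PersonID == personID
                             select cn).ToList();
        }

    This does not explain why the query is slow, but it would confirm or rule out a plain timing issue before digging further into replication or indexing.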

    Read the article

  • When constructing a Bitmap with Bitmap.FromHbitmap(), how soon can the original bitmap handle be deleted?

    - by GBegen
    From the documentation of Image.FromHbitmap() at http://msdn.microsoft.com/en-us/library/k061we7x%28VS.80%29.aspx :

        The FromHbitmap method makes a copy of the GDI bitmap; so you can release the
        incoming GDI bitmap using the GDI DeleteObject method immediately after creating
        the new Image.

    This pretty explicitly states that the bitmap handle can be immediately deleted with DeleteObject as soon as the Bitmap instance is created. Looking at the implementation of Image.FromHbitmap() with Reflector, however, shows that it is a pretty thin wrapper around the GDI+ function GdipCreateBitmapFromHBITMAP(). There is pretty scant documentation on the GDI+ flat functions, but http://msdn.microsoft.com/en-us/library/ms533971%28VS.85%29.aspx says that GdipCreateBitmapFromHBITMAP() corresponds to the Bitmap::Bitmap() constructor that takes an HBITMAP and an HPALETTE as parameters. The documentation for this version of the Bitmap::Bitmap() constructor, at http://msdn.microsoft.com/en-us/library/ms536314%28VS.85%29.aspx, has this to say:

        You are responsible for deleting the GDI bitmap and the GDI palette. However, you
        should not delete the GDI bitmap or the GDI palette until after the GDI+
        Bitmap::Bitmap object is deleted or goes out of scope. Do not pass to the GDI+
        Bitmap::Bitmap constructor a GDI bitmap or a GDI palette that is currently (or
        was previously) selected into a device context.

    Furthermore, one can see from the C++ portion of the GDI+ source, in GdiPlusBitmap.h, that the Bitmap::Bitmap() constructor in question is itself a wrapper for the GdipCreateBitmapFromHBITMAP() function from the flat API:

        inline Bitmap::Bitmap(
            IN HBITMAP hbm,
            IN HPALETTE hpal
            )
        {
            GpBitmap *bitmap = NULL;
            lastResult = DllExports::GdipCreateBitmapFromHBITMAP(hbm, hpal, &bitmap);
            SetNativeImage(bitmap);
        }

    What I can't easily see is the implementation of GdipCreateBitmapFromHBITMAP(), which is the core of this functionality, but the two remarks in the documentation seem to be contradictory. The .NET documentation says I can delete the bitmap handle immediately, and the GDI+ documentation says the bitmap handle must be kept until the wrapping object is deleted, yet both are based on the same GDI+ function. Furthermore, the GDI+ documentation warns against using a source HBITMAP that is currently or was previously selected into a device context. While I can understand why the bitmap should not be currently selected into a device context, I do not understand why there is a warning against using a bitmap that was previously selected into a device context. That would seem to prevent the use of GDI+ bitmaps that had been created in memory using standard GDI. So, in summary:

    Does the original bitmap handle need to be kept around until the .NET Bitmap object is disposed?
    Does the GDI+ function GdipCreateBitmapFromHBITMAP() make a copy of the source bitmap, or does it merely hold onto the handle to the original?
    Why should I not use an HBITMAP that was previously selected into a device context?
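
    A conservative pattern, regardless of which remark turns out to be authoritative, is to force a fully managed copy before deleting the handle. A minimal sketch (my own, assuming System.Drawing; the DeleteObject P/Invoke is declared by hand, since it is not part of the BCL):

        using System;
        using System.Drawing;
        using System.Runtime.InteropServices;

        static class GdiInterop
        {
            [DllImport("gdi32.dll")]
            static extern bool DeleteObject(IntPtr hObject);

            // Wrap the HBITMAP, copy it into a fully managed Bitmap, then free
            // both the wrapper and the original GDI handle.
            public static Bitmap FromHbitmapDetached(IntPtr hbitmap)
            {
                Bitmap wrapper = Image.FromHbitmap(hbitmap);
                try
                {
                    // new Bitmap(Image) copies the pixel data into a new bitmap.
                    return new Bitmap(wrapper);
                }
                finally
                {
                    wrapper.Dispose();
                    DeleteObject(hbitmap); // no GDI+ object references the handle now
                }
            }
        }

    The extra copy costs memory and time, but it sidesteps the question of whether the wrapper secretly keeps the handle alive.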

    Read the article

  • Asynchronous callback for network in Objective-C Iphone

    - by vodkhang
    I am working with network requests and responses in Objective-C. There is something about the asynchronous model that I don't understand. In summary, I have a view that will show my statuses from two social networks: Twitter and Facebook. When I click refresh, it will call a model manager. That model manager will call two service helpers to request the latest items. When the two service helpers receive data, they pass it back to the model manager, and the model manager adds all the data into a sorted array. What I don't understand here is this: when the responses from the social networks come back, how many threads handle them? From my understanding of multithreading and networking (in Java), there must be two threads handling the two responses, and those two threads will execute the code that adds the responses to the array. So it can have a race condition and the program can go wrong, right? Is this the correct working model for the iPhone in Objective-C? Or is it done in a different way, such that there is never a race condition and we don't have to care about locking and synchronization? Here is my example code (ModelManager.m):

        - (void)updateMyItems:(NSArray *)items {
            self.helpers = [self authenticatedHelpersForAction:NCHelperActionGetMyItems];
            for (id<NCHelper> helper in self.helpers) {
                [helper updateMyItems:items]; // NETWORK request here
            }
        }

        - (void)helper:(id <NCHelper>)helper didReturnItems:(NSArray *)items {
            [self helperDidFinishGettingMyItems:items callback:@selector(model:didGetMyItems:)];
        }

        // some private attributes
        int _currentSocialNetworkItemsCount = 0; // counts the items of one social network

        - (void)helperDidFinishGettingMyItems:(NSArray *)items {
            for (Item *item in items) {
                _currentSocialNetworkItemsCount++;
            }
            NSLog(@"count: %d", _currentSocialNetworkItemsCount);
            _currentSocialNetworkItemsCount = 0;
        }

    I want to ask if there is a case where the method helperDidFinishGettingMyItems is called concurrently. That means, for example, if Facebook returns 10 items and Twitter returns 10 items, will the output of count ever be larger than 10? And if there is only one single thread, how can the thread finish parsing one response and jump to the other response, since, as I understand it, a thread executes sequentially, block of code by block of code?

    Read the article

  • Perl DBD::DB2 installation failed

    - by prabhu
    Hi, we don't have root access on our local machine. I installed the DBI package first and then installed the DBD package. I got the below error first:

        In file included from DB2.h:22,
                         from DB2.xs:7:
        dbdimp.h:10:22: dbivport.h: No such file or directory

    Then I included the DBI path in the Makefile, and then I got the below error:

        DB2.xs: In function `XS_DBD__DB2__db_disconnect':
        DB2.xs:128: error: structure has no member named `_old_cached_kids'
        DB2.xs:129: error: structure has no member named `_old_cached_kids'
        DB2.xs:130: error: structure has no member named `_old_cached_kids'
        DB2.xs: In function `XS_DBD__DB2__db_DESTROY':
        DB2.xs:192: error: structure has no member named `_old_cached_kids'
        DB2.xs:193: error: structure has no member named `_old_cached_kids'
        DB2.xs:194: error: structure has no member named `_old_cached_kids'

    The versions I am trying to install are DBI-1.610_90.tar.gz and DBD-DB2-1.78.tar.gz. I am using:

        perl Makefile.PL PREFIX=/home/prabhu/perl_pm/lib

    The output of perl -V is as follows:

        Summary of my perl5 (revision 5 version 8 subversion 5) configuration:
          Platform:
            osname=linux, osvers=2.4.21-27.0.2.elsmp, archname=i386-linux-thread-multi
            uname='linux decompose.build.redhat.com 2.4.21-27.0.2.elsmp #1 smp wed jan 12 23:35:44 est 2005 i686 i686 i386 gnulinux '
            config_args='-des -Doptimize=-O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -Dversion=5.8.5 -Dmyhostname=localhost -Dperladmin=root@localhost -Dcc=gcc -Dcf_by=Red Hat, Inc. -Dinstallprefix=/usr -Dprefix=/usr -Darchname=i386-linux -Dvendorprefix=/usr -Dsiteprefix=/usr -Duseshrplib -Dusethreads -Duseithreads -Duselargefiles -Dd_dosuid -Dd_semctl_semun -Di_db -Ui_ndbm -Di_gdbm -Di_shadow -Di_syslog -Dman3ext=3pm -Duseperlio -Dinstallusrbinperl -Ubincompat5005 -Uversiononly -Dpager=/usr/bin/less -isr -Dinc_version_list=5.8.4 5.8.3 5.8.2 5.8.1 5.8.0'
            hint=recommended, useposix=true, d_sigaction=define
            usethreads=define use5005threads=undef useithreads=define usemultiplicity=define
            useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
            use64bitint=undef use64bitall=undef uselongdouble=undef
            usemymalloc=n, bincompat5005=undef
          Compiler:
            cc='gcc', ccflags='-D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm',
            optimize='-O2 -g -pipe -m32 -march=i386 -mtune=pentium4',
            cppflags='-D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -I/usr/include/gdbm'
            ccversion='', gccversion='3.4.4 20050721 (Red Hat 3.4.4-2)', gccosandvers=''
            intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
            d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
            ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
            alignbytes=4, prototype=define
          Linker and Libraries:
            ld='gcc', ldflags=' -L/usr/local/lib'
            libpth=/usr/local/lib /lib /usr/lib
            libs=-lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lpthread -lc
            perllibs=-lnsl -ldl -lm -lcrypt -lutil -lpthread -lc
            libc=/lib/libc-2.3.4.so, so=so, useshrplib=true, libperl=libperl.so
            gnulibc_version='2.3.4'
          Dynamic Linking:
            dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E -Wl,-rpath,/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE'
            cccdlflags='-fPIC', lddlflags='-shared -L/usr/local/lib'

        Characteristics of this binary (from libperl):
          Compile-time options: DEBUGGING MULTIPLICITY USE_ITHREADS USE_LARGE_FILES PERL_IMPLICIT_CONTEXT
          Built under linux
          Compiled at Aug 2 2005 04:48:47
          %ENV:
            PERL5LIB=":/opt/india/dev/perl/XML-XPath-1.13/lib/perl5:/opt/india/dev/perl/XML-XPath-1.13/lib/perl5/site_perl:/opt/india/dev/perl/XML-XPath-1.13/lib/perl5:/opt/india/dev/perl/XML-XPath-1.13/lib/perl5/site_perl"

    Could anyone help me resolve this issue? I appreciate any help in advance. Prabhu

    Read the article

  • C# SerialPort - Problems mixing ports with different baud rates.

    - by GrandAdmiral
    Greetings, I have two devices that I would like to connect over a serial interface, but they have incompatible connections. To get around this problem, I connected them both to my PC and I'm working on a C# program that will route traffic on COM port X to COM port Y and vice versa. The program connects to two COM ports. In the data received event handler, I read in incoming data and write it to the other COM port. To do this, I have the following code:

        private void HandleDataReceived(SerialPort inPort, SerialPort outPort)
        {
            byte[] data = new byte[1];
            while (inPort.BytesToRead > 0)
            {
                // Read the data
                data[0] = (byte)inPort.ReadByte();

                // Write the data
                if (outPort.IsOpen)
                {
                    outPort.Write(data, 0, 1);
                }
            }
        }

    That code worked fine as long as the outgoing COM port operated at a higher baud rate than the incoming COM port. If the incoming COM port was faster than the outgoing COM port, I started missing data. I had to correct the code like this:

        private void HandleDataReceived(SerialPort inPort, SerialPort outPort)
        {
            byte[] data = new byte[1];
            while (inPort.BytesToRead > 0)
            {
                // Read the data
                data[0] = (byte)inPort.ReadByte();

                // Write the data
                if (outPort.IsOpen)
                {
                    outPort.Write(data, 0, 1);
                    while (outPort.BytesToWrite > 0); // <-- Change to fix problem
                }
            }
        }

    I don't understand why I need that fix. I'm new to C# (this is my first program), so I'm wondering if there is something I am missing. The SerialPort defaults to a 2048-byte write buffer and my commands are less than ten bytes. The write buffer should have the ability to buffer the data until it can be written to a slower COM port. In summary, I'm receiving data on COM X and writing the data to COM Y. COM X is connected at a faster baud rate than COM Y. Why doesn't the buffering in the write buffer handle this difference? Why does it seem that I need to wait for the write buffer to drain to avoid losing data? Thanks!
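
    As a side note, here is a sketch of a chunked variant of the copy loop (my own variation, untested on the poster's hardware). Reading everything available in a single Read call keeps the DataReceived handler shorter than the byte-at-a-time loop, which can matter when the incoming port runs at the higher baud rate:

        private void CopyAvailable(SerialPort inPort, SerialPort outPort)
        {
            int count = inPort.BytesToRead;
            if (count <= 0)
                return;

            byte[] buffer = new byte[count];
            int read = inPort.Read(buffer, 0, count); // may return fewer bytes than requested

            if (outPort.IsOpen)
            {
                outPort.Write(buffer, 0, read);
            }
        }

    This does not by itself explain the need for the drain loop, but it reduces time spent inside the handler and makes any remaining loss easier to attribute to the write side.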

    Read the article

< Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >