Search Results

Search found 7606 results on 305 pages for 'raam dev'.


  • In what (small) ways can I modify Octave's compile options to enhance it without breaking it?

    - by irrational John
    If the title to this question seems a bit vague, I am sorry. But I wasn't sure how to distill what I am attempting to do into a single sentence. A few weeks back I learned that I could build and install recent releases of Octave on an Ubuntu 12.04 system by following the steps below. Install the tools needed to compile, link, and run octave. For Ubuntu the commands below have worked for me. sudo apt-get build-dep octave3.2 sudo apt-get install build-essential gnuplot gtk2-engines-pixbuf sudo apt-get install libfontconfig-dev bison Next, download the source code for an Octave release from the Gnu Project Archives for Octave and unpack the archive into a folder on your system. Use the commands below to build, check, and install octave. ./configure make make check sudo make install Unfortunately it turns out that the above builds an Octave that contains all the debugging symbol tables. The object files alone are huge taking up around 1.7 GB. The current Octave documentation suggests To compile without debugging symbols try the command make CFLAGS=-O CXXFLAGS=-O LDFLAGS= instead of just make. However, when I tried this it did not work. The -g option was still used for the compiles. For the heck of it I instead tried ./configure CFLAGS=-O CXXFLAGS=-O and this did work. (Instead of ~1.7GB the result of the build now takes up around 253MB). My questions are Is this actually the correct (recommended?) method to use to compile Octave without debugging symbols (i.e. without -g)? How would I compile Octave so it uses x86_64 rather than x86? Note: I am not asking how to compile Octave to use the (experimental) 64-bit integers for array dimensions. I just want to allow the compiler to use the extra registers and word sizes available when an app runs in 64-bit mode. Is a (more) complete list available for the directives used with the Octave Makefile? I have only seen make, make check, and make install documented. But apparently make distclean is also allowed. (It removes the compilation results so you can do a complete rebuild of everything.) I'm wondering what else might be available. FWIW, I have tried using ./configure CFLAGS="-O3 -mtune=core2 -m64" CXXFLAGS="-O3 -mtune=core2 -m64" and, surprisingly, it not only appeared to build, but also ran and passed the make check tests. The ./configure script even gave me the (deceptively?) reassuring message "Octave is now configured for x86_64-unknown-linux-gnu". But of course that's not the same thing as saying it actually "works". Is there a recommended way to enable Octave to run as an x86_64 app? I have also tried looking inside the Octave Makefile to see if I could decipher what command line directives it accepts. I got nowhere. I have not a single clue as to how that Makefile does whatever it is that it does.
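    A minimal sketch of the configure-time approach described above (the flag values are examples, not an official Octave recommendation):

    ```sh
    # Pass optimization flags at configure time so the default -g never gets added.
    ./configure CFLAGS="-O2" CXXFLAGS="-O2" FFLAGS="-O2"
    make
    make check
    sudo make install

    # Alternatively, keep the default build and strip symbols at install time;
    # install-strip is a standard automake target, so it should be available here.
    sudo make install-strip
    ```

    On the x86_64 question: on a 64-bit Ubuntu install, gcc already targets x86_64 by default, so the build above is a 64-bit build even without -m64; the configure summary "configured for x86_64-unknown-linux-gnu" is reporting exactly that.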

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two node A/A cluster with a common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure which is the right way to configure gfs resource. Here is the rm section of /etc/cluster/cluster.conf: <rm> <failoverdomains> <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1"> <failoverdomainnode name="rhc-n1"/> </failoverdomain> <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1"> <failoverdomainnode name="rhc-n2"/> </failoverdomain> </failoverdomains> <resources> <script file="/etc/init.d/clvm" name="clvmd"/> <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/> </resources> <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart"> <script ref="clvmd"> <clusterfs ref="gfs"/> </script> </service> <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart"> <script ref="clvmd"> <clusterfs ref="gfs"/> </script> </service> </rm> This is what I mean: when using clusterfs resource agent to handle GFS partition, it is not unmounted by default (unless force_unmount option is given). This way when I issue clusvcadm -s shared-storage-inst1 clvm is stopped, but GFS is not unmounted, so a node cannot alter LVM structure on shared storage anymore, but can still access data. And even though a node can do it quite safely (dlm is still running), this seems to be rather inappropriate to me, since clustat reports that the service on a particular node is stopped. Moreover if I later try to stop cman on that node, it will find a dlm locking, produced by GFS, and fail to stop. I could have simply added force_unmount="1", but I would like to know what is the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue on how the decision was made. Apart from that I have found sample configurations, where people manage GFS partitions with gfs2 init script - https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources or even as simply as just enabling services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like: chkconfig gfs2 on If I understand the latest approach correctly, such cluster only controls whether nodes are still alive and can fence errant ones, but such cluster has no control over the status of its resources. I have some experience with Pacemaker and I'm used to that all resources are controlled by a cluster and an action can be taken when not only there are connectivity issues, but any of the resources misbehave. So, which is the right way for me to go: leave GFS partition mounted (any reasons to do so?) set force_unmount="1". Won't this break anything? Why this is not the default? use script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage GFS partition. start it at boot and don't include in cluster.conf (any reasons to do so?) This may be a sort of question that cannot be answered unambiguously, so it would be also of much value for me if you shared your experience or expressed your thoughts on the issue. How does for example /etc/cluster/cluster.conf look like when configuring gfs with Conga or ccs (they are not available to me since for now I have to use Ubuntu for the cluster)? Thanks you very much!
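    For reference, letting rgmanager unmount the filesystem when the service stops only requires the force_unmount attribute on the resource already defined above; this is a sketch based on the cluster.conf in the question, not a tested configuration:

    ```xml
    <!-- same resource as above, with forced unmount on service stop -->
    <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs"
               device="/dev/vg-cs/lv-gfs" force_unmount="1"/>
    ```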

  • Transparent proxy which preserves client MAC address

    - by A G
    I have a customer that wants to intercept SSL traffic as it leaves their network. My proposed solution is to setup a proxy that is transparent and both layer 2 and layer 3 so it can simply be dropped into their network without any change in config required. The proxy has two NICs, one connected to the server, the other to the client. The client, proxy and gateway are under control of the customer, the server is not. For example: client --- Proxy --- gateway -|- server I have my proxy program configured with IP_TRANSPARENT socket option to it can respond to connections destined for a remote IP. I am using the following setup: iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 3128 --tproxy-mark 1/1 iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1 ip rule add fwmark 1/1 table 1 ip route add local 0.0.0.0/0 dev lo table 1 The client in question is on its own subnet and has been configured so that the proxy is the default gateway. The result is: Client sends a frame to the proxy; source IP is client, source mac is client, destination IP is server, destination mac is proxy Proxy forwards this frame to the gateway; source IP is proxy, source mac is proxy, destination IP is server, destination mac is gateway Gateway forwards this to the server and gets a response back. Gateway sends reply back to proxy; source IP is server, source mac is gateway, destination IP is proxy, destination mac is proxy Proxy forwards this reply to client; source IP is server, source mac is proxy, destination IP is client, destination mac is client. The tproxy and iptables configuration lets the proxy send packets with a non local ip address. Is there a way to make something transparent at the mac address level? That is, put the client on the same subnet as the gateway. The gateway sees the source IP and mac as that of the client, even though they originated from the proxy. Could this be done by configuring the proxy as a bridge then use ebtables to escalate the traffic to be handled by iptables? When I use ebtables to push something up to iptables, it appears my proxy program doesn't respond to the packets as they are destined for the gateways's mac address, not the proxy's. What are some other potential avenues I could investigate? EDIT: When the client and gateway are on different subnets (and client has set the proxy as the gateway), it works as described in 1 to 5. But I want to know if it is possible to have the client and gateway on the same subnet and have the proxy fully transparent (ie client is not aware of the proxy). Thanks! EDIT 2: I can configure the proxy as a bridge using brctl, but cannot find a way to direct this traffic to my proxy program - asked here Possible for linux bridge to intercept traffic?. Currently, with the description numbered 1 to 5, it operates at layer 3; it is transparent on the client side (client thinks it is talking to the server's IP), but not on the gateway side (gateway is talking to the proxy's IP). What I want to find out is, is it possible to make this operate at layer 2, so it is fully transparent? What are the available options I should research? Thanks
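    One avenue worth researching is exactly the bridge + ebtables route: bridge the two NICs so frames pass through with MAC addresses untouched, then use an ebtables broute rule to lift only the TCP port you care about into the IP stack, where the existing TPROXY rule can claim it. The sketch below is untested; the interface names and the port are assumptions:

    ```sh
    # Bridge the client-facing and gateway-facing NICs (names assumed).
    brctl addbr br0
    brctl addif br0 eth0
    brctl addif br0 eth1
    ip link set br0 up

    # In the broute chain, "redirect --redirect-target DROP" means "route this
    # frame" (hand it to the local IP stack) instead of bridging it onward.
    ebtables -t broute -A BROUTING -i eth0 -p IPv4 --ip-proto tcp --ip-dport 443 \
             -j redirect --redirect-target DROP

    # The same TPROXY interception as in the question, applied to the diverted port.
    iptables -t mangle -A PREROUTING -i br0 -p tcp --dport 443 \
             -j TPROXY --on-port 3128 --tproxy-mark 1/1
    ```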

  • unattended-upgrades does not reboot

    - by Cheiron
    I am running Debian 7 stable with unattended-upgrades (every morning at 6 AM) to make sure I am always fully updated. I have the following config: $ cat /etc/apt/apt.conf.d/50unattended-upgrades // Automatically upgrade packages from these origin patterns Unattended-Upgrade::Origins-Pattern { // Archive or Suite based matching: // Note that this will silently match a different release after // migration to the specified archive (e.g. testing becomes the // new stable). "o=Debian,a=stable"; "o=Debian,a=stable-updates"; // "o=Debian,a=proposed-updates"; "origin=Debian,archive=stable,label=Debian-Security"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { // "vim"; // "libc6"; // "libc6-dev"; // "libc6-i686"; }; // This option allows you to control if on a unclean dpkg exit // unattended-upgrades will automatically run // dpkg --force-confold --configure -a // The default is true, to ensure updates keep getting installed //Unattended-Upgrade::AutoFixInterruptedDpkg "false"; // Split the upgrade into the smallest possible chunks so that // they can be interrupted with SIGUSR1. This makes the upgrade // a bit slower but it has the benefit that shutdown while a upgrade // is running is possible (with a small delay) //Unattended-Upgrade::MinimalSteps "true"; // Install all unattended-upgrades when the machine is shuting down // instead of doing it in the background while the machine is running // This will (obviously) make shutdown slower //Unattended-Upgrade::InstallOnShutdown "true"; // Send email to this address for problems or packages upgrades // If empty or unset then no email is sent, make sure that you // have a working mail setup on your system. A package that provides // 'mailx' must be installed. E.g. "[email protected]" Unattended-Upgrade::Mail "root"; // Set this value to "true" to get emails only on errors. Default // is to always send a mail if Unattended-Upgrade::Mail is set Unattended-Upgrade::MailOnlyOnError "true"; // Do automatic removal of new unused dependencies after the upgrade // (equivalent to apt-get autoremove) //Unattended-Upgrade::Remove-Unused-Dependencies "false"; // Automatically reboot *WITHOUT CONFIRMATION* if a // the file /var/run/reboot-required is found after the upgrade Unattended-Upgrade::Automatic-Reboot "true"; // Use apt bandwidth limit feature, this example limits the download // speed to 70kb/sec //Acquire::http::Dl-Limit "70"; As you can see Automatic-Reboot is true and thus the server should automaticly reboot. Last time I checked the server was online for over 100 days, which means that the update from Debian 7.1 to Debian 7.2 has happened while the server was up (and indeed, all updates were installed), but this involves kernel updates, which means that the server should reboot. It did not. The server was running very slow, so I rebooted which fixed that. I did some research and found out that unattended-upgrades responds to the reboot-required file in /var/run/. I touched this file and waited one week, the file still exists and the server did not reboot. So I think that unattended-uppgrades ignores the auto-reboot part. So, am I doing somthing wrong here? Why did the server not restart? The upgrade part works perfect by the way, its just the reboot part that does not seem to work as it should.
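    One way to see whether the reboot logic is evaluated at all is to run the upgrader by hand with debugging on (a sketch; paths are the Debian defaults). Note that, as far as I know, the reboot check only runs at the end of a pass that actually installed packages, so touching /var/run/reboot-required and simply waiting will not by itself trigger a reboot:

    ```sh
    # One verbose, non-destructive pass; the output states whether
    # Automatic-Reboot was honoured and whether reboot-required was seen.
    sudo unattended-upgrade --debug --dry-run

    # What the scheduled runs actually did:
    less /var/log/unattended-upgrades/unattended-upgrades.log
    ```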

  • iCloud stuff stops working while connected to OpenVPN

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. Client is the Viscosity client on Mac OS X 10.8.2, and after some testing, we can rule out the client as being part of the problem. Everything has been working fine except for Apple's iCloud stuff. Web surfing, email, FTP, NNTP, and Skype are all working as expected. It's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working. I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working. Connect again, iCloud stops working. Here's the server.conf: status openvpn-status.log log /var/log/openvpn.log verb 4 port 1194 proto udp dev tun ca /etc/openvpn/ca.crt cert /etc/openvpn/server.crt key /etc/openvpn/server.key dh /etc/openvpn/dh1024.pem server 10.9.8.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "redirect-gateway def1" push “dhcp-option DNS 10.9.8.1? keepalive 10 120 duplicate-cn cipher BF-CBC comp-lzo user nobody group nogroup persist-key persist-tun tun-mtu 1500 mssfix 1400 I'm using iptables in a script, and it's also fairly simplistic. iptables -F iptables -t nat -F iptables -t mangle -F iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT iptables -A INPUT -p tcp --dport 22 -j ACCEPT iptables -A INPUT -p tcp --dport 1194 -j ACCEPT iptables -A INPUT -p udp --dport 1194 -j ACCEPT iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip> echo 1 > /proc/sys/net/ipv4/ip_forward I tried forwarding ports as well, with no success. iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6 I am also sometimes behind a double-NAT situation that I have no control over. Client -> work VPN -> my OpenVPN box -> Internet. Client -> Airport Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet. Those two situations are just the fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server. I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't posted yet and seems to be in their moderation queue still. Tried on freenode irc channels, but nobody is awake, so here I am. I have Googled extensively for this, and can find nothing that is related. Help me get iCloud stuff working again! (I tried serverfault, it was closed as off-topic. I'm trying here and the Unix site as well. Here because it's a more general audience that might know more about OpenVPN based on the number of questions I see asked about it) EDIT: -I have also tried upgrading to Version: 2.3-beta1-debian0 - issue persists. -Removed all iptables rules except for the ones that flush -left this rule:iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source (server ip) -added iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT still, nothing works. I can see traffic in tcpdump on the server if i watch the tunnel: 20:03:48.702835 IP nk11p01st-courier105-bz.push.apple.com.5223 10.9.8.6.60772: Flags [F.], seq 2635, ack 1218, win 76, options [nop,nop,TS val 914984811 ecr 745921298], length 0 20:03:48.911244 IP 10.9.8.6.60772 nk11p01st-courier105-bz.push.apple.com.5223: Flags [R], seq 3621143451, win 0, length 0 But still, no push messages/notifications are ever delivered. 
:/ EDIT: Further testing indicates that it might actually be the client after all.
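    Since Apple push traffic (TCP 5223) is often the first casualty of a marginal tunnel MTU/MSS, one cheap test is to probe the largest packet that passes unfragmented while the VPN is up; the values below are examples:

    ```sh
    # From the Mac with the VPN connected: -D sets the don't-fragment bit on macOS.
    # 1472 bytes of payload corresponds to a clean 1500-byte path; if you have to
    # drop well below ~1400, lower tun-mtu/mssfix (or add "fragment 1300") on both
    # the server and the client so the settings match.
    ping -D -s 1472 -c 3 8.8.8.8
    ping -D -s 1300 -c 3 8.8.8.8
    ```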

  • Forwarding rsyslog to syslog-ng, with FQDN and facility separation

    - by Joshua Miller
    I'm attempting to configure my rsyslog clients to forward messages to my syslog-ng log repository systems. Forwarding messages works "out of the box", but my clients are logging short names, not FQDNs. As a result the messages on the syslog repo use short names as well, which is a problem because one can't determine which system the message originated from easily. My clients get their names through DHCP / DNS. I've tried a number of solutions trying to get this working, but without success. I'm using rsyslog 4.6.2 and syslog-ng 3.2.5. I've tried setting $PreserveFQDN on as the first directive in /etc/rsyslog.conf (and restarting rsyslog of course). It seems to have no effect. hostname --fqdn on the client returns the proper FQDN, so the problem isn't whether the system can actually figure out its own FQDN. $LocalHostName <fqdn> looked promising, but this directive isn't available in my version of rsyslog (Available since 4.7.4+, 5.7.3+, 6.1.3+). Upgrading isn't an option at the moment. Configuring the syslog-ng server to populate names based on reverse lookups via DNS isn't an option. There are complexities with reverse DNS and the public cloud. Specifying for the forwarder to use a custom template seems like a viable option at first glance. I can specify the following, which causes local logging to begin using the FQDN on the syslog-ng repo. $template MyTemplate, "%timestamp% <FQDN> %syslogtag%%msg%" $ActionForwardDefaultTemplate MyTemplate However, when I put this in place syslog-ng seems to be unable to categorize messages by facility or priority. Messages come in as FQDN, but everything is put in to user.log. When I don't use the custom template, messages are properly categorized under facility and priority, but with the short name. So, in summary, if I manually trick rsyslog into including the FQDN, priority and facility becomes lost details to syslog-ng. How can I get rsyslog to do FQDN logging which works properly going to a syslog-ng repository? rsyslog client config: $ModLoad imuxsock.so # provides support for local system logging (e.g. via logger command) $ModLoad imklog.so # provides kernel logging support (previously done by rklogd) $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat *.info;mail.none;authpriv.none;cron.none /var/log/messages authpriv.* /var/log/secure mail.* -/var/log/maillog cron.* /var/log/cron *.emerg * uucp,news.crit /var/log/spooler local7.* /var/log/boot.log $WorkDirectory /var/spool/rsyslog # where to place spool files $ActionQueueFileName fwdRule1 # unique name prefix for spool files $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible) $ActionQueueSaveOnShutdown on # save messages to disk on shutdown $ActionQueueType LinkedList # run asynchronously $ActionResumeRetryCount -1 # infinite retries if host is down *.* @syslog-ng1.example.com *.* @syslog-ng2.example.com syslog-ng configuration (abridged for brevity): options { flush_lines (0); time_reopen (10); log_fifo_size (1000); long_hostnames (off); use_dns (no); use_fqdn (yes); create_dirs (no); keep_hostname (yes); }; source src { unix-stream("/dev/log"); internal(); udp(ip(0.0.0.0) port(514)); }; destination per_host_destination { file( "/var/log/syslog-ng/devices/$HOST/$FACILITY.log" owner("root") group("root") perm(0644) dir_owner(root) dir_group(root) dir_perm(0775) create_dirs(yes)); }; log { source(src); destination(per_facility_destination); };
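    The facility/priority that syslog-ng sorts on travels in the "<PRI>" header at the very start of each message, and a custom template that omits it leaves syslog-ng guessing (hence everything landing in user.log). A sketch of a forwarding template that keeps the priority while still hard-coding the FQDN, modelled on rsyslog's built-in forwarding format; replace host.example.com with the machine's real name:

    ```
    $template MyTemplate,"<%PRI%>%timestamp% host.example.com %syslogtag%%msg%"
    $ActionForwardDefaultTemplate MyTemplate
    ```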

  • /usr/bin/sshd isn't linked against PAM on one of my systems. What is wrong and how can I fix it?

    - by marc.riera
    Hi, I'm using AD as my user account server with ldap. Most of the servers run with UsePam yes except this one, it has lack of pam support on sshd. root@linserv9:~# ldd /usr/sbin/sshd linux-vdso.so.1 => (0x00007fff621fe000) libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000) libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000) libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000) libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000) libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000) libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000) libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000) /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000) I have this packages installed root@linserv9:~# dpkg -l|grep -E 'pam|ssh' ii denyhosts 2.6-2.1 an utility to help sys admins thwart ssh hac ii libpam-modules 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules for PAM ii libpam-runtime 0.99.7.1-5ubuntu6.1 Runtime support for the PAM library ii libpam-ssh 1.91.0-9.2 enable SSO behavior for ssh and pam ii libpam0g 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules library ii libpam0g-dev 0.99.7.1-5ubuntu6.1 Development files for PAM ii openssh-blacklist 0.1-1ubuntu0.8.04.1 list of blacklisted OpenSSH RSA and DSA keys ii openssh-client 1:4.7p1-8ubuntu1.2 secure shell client, an rlogin/rsh/rcp repla ii openssh-server 1:4.7p1-8ubuntu1.2 secure shell server, an rshd replacement ii quest-openssh 5.2p1_q13-1 Secure shell root@linserv9:~# What I'm doing wrong? thanks. Edit: root@linserv9:~# cat /etc/pam.d/sshd # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # In Debian 4.0 (etch), locale-related environment variables were moved to # /etc/default/locale, so read that as well. auth required pam_env.so envfile=/etc/default/locale # Standard Un*x authentication. @include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. @include common-account # Standard Un*x session setup and teardown. @include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Set up SELinux capabilities (need modified pam) # session required pam_selinux.so multiple # Standard Un*x password updating. @include common-password Edit2: UsePAM yes fails With this configuration ssh fails to start : root@linserv9:/home/admmarc# cat /etc/ssh/sshd_config |grep -vE "^[ \t]*$|^#" Port 22 Protocol 2 ListenAddress 0.0.0.0 RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys ChallengeResponseAuthentication yes UsePAM yes Subsystem sftp /usr/lib/sftp-server root@linserv9:/home/admmarc# The error it gives is as follows root@linserv9:/home/admmarc# /etc/init.d/ssh start * Starting OpenBSD Secure Shell server sshd /etc/ssh/sshd_config: line 75: Bad configuration option: UsePAM /etc/ssh/sshd_config: terminating, 1 bad configuration options ...fail! root@linserv9:/home/admmarc#
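    Since quest-openssh is installed alongside Debian's openssh-server, it is worth confirming which sshd binary actually gets started and whether that particular build was linked with PAM (the "Bad configuration option: UsePAM" error is a strong hint it was not). A quick check might look like this:

    ```sh
    ps -o pid,cmd -C sshd                 # full path of the running daemon
    dpkg -S /usr/sbin/sshd                # which package owns that path
    ldd /usr/sbin/sshd | grep -i libpam   # no output = built without PAM support
    ```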

  • mysqld service crashes on restart, after importing mysqldump #innodb

    - by ubunut
    I have 2 mysql servers. Let's call them server01 & server02. Both have the same configuration: mysqladmin Ver 8.42 Distrib 5.1.61, for redhat-linux-gnu on x86_64 [client] default-character-set=utf8 [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 max_allowed_packet = 16M default-character-set=utf8 default-collation=utf8_unicode_ci character-set-server=utf8 collation-server=utf8_unicode_ci default-storage-engine = InnoDB innodb_data_home_dir = /var/lib/mysql innodb_log_group_home_dir = /var/lib/mysql innodb_data_file_path = ibdata1:10M:autoextend innodb_additional_mem_pool_size = 2M innodb_log_file_size = 5M innodb_log_buffer_size = 8M innodb_lock_wait_timeout = 50 innodb_flush_log_at_trx_commit = 1 innodb_buffer_pool_size = 700M table_cache = 300 thread_cache_size = 4 query_cache_size = 200m query_cache_limit = 10m [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid I make a mysqldump on server01: mysqldump -uuser -ppassword --all-databases testservers.sql (most tables in these databases are innodb, some of the mysql.* tables are Innodb too) Then I import the testservers.sql on server02: mysql -uuser < testservers.sql (mysqld has been started with --skip-network). So far so good, I can login into mysql & everything seems to be ok. BUT when I exit to the shell and execute service mysqld restart, The service fails to start. stack-trace in /var/log/mysqld.log: 121022 14:53:19 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 121022 14:53:19 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead. 121022 14:53:19 [Warning] '--default-collation' is deprecated and will be removed in a future release. Please use '--collation-server' instead. 12:53:19 UTC - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8384512 read_buffer_size=131072 max_used_connections=0 max_threads=151 thread_count=0 connection_count=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338324 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x267e630 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7fff3efe0be0 thread_stack 0x40000 /usr/libexec/mysqld(my_print_stacktrace+0x29) [0x84bd89] /usr/libexec/mysqld(handle_fatal_signal+0x483) [0x6a0be3] /lib64/libpthread.so.0() [0x338d60f500] /usr/libexec/mysqld(ha_resolve_by_name(THD*, st_mysql_lex_string const*)+0x81) [0x6956e1] /usr/libexec/mysqld(open_table_def(THD*, st_table_share*, unsigned int)+0xe0a) [0x60e5ba] /usr/libexec/mysqld(get_table_share(THD*, TABLE_LIST*, char*, unsigned int, unsigned int, int*)+0x20b) [0x602b0b] /usr/libexec/mysqld() [0x603597] /usr/libexec/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x7a1) [0x6079a1] /usr/libexec/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5d0) [0x608570] /usr/libexec/mysqld(open_and_lock_tables_derived(THD*, TABLE_LIST*, bool)+0x6a) [0x60877a] /usr/libexec/mysqld(plugin_init(int*, char**, int)+0x622) [0x715af2] /usr/libexec/mysqld() [0x5bd3b2] /usr/libexec/mysqld(main+0x1b3) [0x5bfc93] /lib64/libc.so.6(__libc_start_main+0xfd) [0x338d21ecdd] /usr/libexec/mysqld() [0x5087b9] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (0): is an invalid pointer Connection ID (thread ID): 0 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 121022 14:53:19 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended A typical mysqdump entry looks like this: DROP TABLE IF EXISTS `adodb_logsql`; /*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET character_set_client = utf8 */; CREATE TABLE `adodb_logsql` ( `id` bigint(10) unsigned NOT NULL AUTO_INCREMENT, `created` datetime NOT NULL, `sql0` varchar(250) NOT NULL DEFAULT '', `sql1` text, `params` text, `tracer` text, `timer` decimal(16,6) NOT NULL DEFAULT '0.000000', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='to save some logs from ADOdb'; /*!40101 SET character_set_client = @saved_cs_client */; IF I change all occurrences of "ENGINE=InnoDB" to "ENGINE=MyISAM" before import, then the service has no problem restarting. I'm quite puzzled as to what's happening, maybe I'm just an idiot, then by all means tell me so. Any help would be greatly appreciated!
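    The backtrace points into plugin/table loading at startup, which is consistent with the imported mysql.* system tables confusing the second server. One way to sidestep that is to dump only the application schemas and leave server02's own system tables alone; a sketch with placeholder database names:

    ```sh
    # Dump only the application databases (names are placeholders).
    mysqldump -uuser -p --databases app_db1 app_db2 --routines --triggers > testservers.sql

    # On server02:
    mysql -uuser -p < testservers.sql
    mysql_upgrade -uuser -p    # safe to run; repairs system tables if they need it
    ```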

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy against ping, but still pingable

    - by user135501
    I'm in DMZ mode, so I'm firewalling myself, stealthy all ok, but I get faulty test results from Shields Up that there are pings. Yesterday I couldn't make a connection to game servers work, because ping block was enabled (on the router). I disabled it, but this persists even due to my firewall. What is the connection between me and my router in DMZ mode (for my machine, there is bunch of others too behind router firewall)? When it allows router affecting if I'm pingable or not and if router has setting not blocking ping, rules in my iptables for this scenario do not work. Please ignore commented rules, I do uncomment them as I want. These two should do the job right? iptables -A INPUT -p icmp --icmp-type echo-request -j DROP echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all Here are my iptables: #!/bin/sh # Begin /bin/firewall-start # Insert connection-tracking modules (not needed if built into the kernel). #modprobe ip_tables #modprobe iptable_filter #modprobe ip_conntrack #modprobe ip_conntrack_ftp #modprobe ipt_state #modprobe ipt_LOG # allow local-only connections iptables -A INPUT -i lo -j ACCEPT # free output on any interface to any ip for any service # (equal to -P ACCEPT) iptables -A OUTPUT -j ACCEPT # permit answers on already established connections # and permit new connections related to established ones (eg active-ftp) iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT #Gamespy&NWN #iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT # Log everything else: What's Windows' latest exploitable vulnerability? iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT" # set a sane policy: everything not accepted > /dev/null iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP iptables -A INPUT -p icmp --icmp-type echo-request -j DROP # be verbose on dynamic ip-addresses (not needed in case of static IP) echo 2 > /proc/sys/net/ipv4/ip_dynaddr # disable ExplicitCongestionNotification - too many routers are still # ignorant echo 0 > /proc/sys/net/ipv4/tcp_ecn #ping death echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all # If you are frequently accessing ftp-servers or enjoy chatting you might # notice certain delays because some implementations of these daemons have # the feature of querying an identd on your box for your username for # logging. Although there's really no harm in this, having an identd # running is not recommended because some implementations are known to be # vulnerable. 
# To avoid these delays you could reject the requests with a 'tcp-reset': #iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset #iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT # To log and drop invalid packets, mostly harmless packets that came in # after netfilter's timeout, sometimes scans: #iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix \ "FIREWALL:INVALID" #iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP # End /bin/firewall-start
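    Before tweaking the local rules further, it is worth confirming whether the probe's echo-requests ever reach the Slackware box at all; if the 2Wire answers them itself, nothing in the local iptables/sysctl setup can change the result. A quick check (interface name assumed):

    ```sh
    # Run this while the Shields Up test is pinging; if nothing appears here,
    # the router is replying on your behalf and the block must be set on the 2Wire.
    tcpdump -ni eth0 'icmp[icmptype] = icmp-echo'
    ```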

  • Outlook Web Access, reverse proxy and browser

    - by M'vy
    Hi SF'ers! We recently moved an exchange server behind a reverse proxy due to the loss of a public IP. I've managed to configure the reverse proxy (httpd proxy_http). But there is a problem for the SSL configuration. When accessing the OWA interface with Firefox, all is ok and working. When accessing with MSIE or Chrome, they do not retrieve the good SSL Certificate. I think this is due to the multiples virtual host for httpd. Is there a workaround to make sure MSIE/Chrome request the certificate for the good domain name like FF does? Already tested with the SSL virtual host : SetEnvIf User-Agent ".*MSIE.*" value BrowserMSIE Header unset WWW-Authenticate Header add WWW-Authenticate "Basic realm=exchange.domain.com" A: ProxyPreserveHost On also: BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 Or: SetEnvIf User-Agent ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 And lots of ProxyPassand ProxyReversePath on /exchweb /exchange /public etc... And it still don't seem to work. Any clue? Thanks. Edit 1: Precision of versions # openssl version OpenSSL 0.9.8k-fips 25 Mar 2009 /usr/sbin/httpd -v Server version: Apache/2.2.11 (Unix) Server built: Mar 17 2009 09:15:10 Browser versions : MSIE : 8.0.6001 Opera: Version 11.01 Revision 1190 Firefox: 3.6.15 Chrome: 10.0.648.151 Operating System: Windows Vista 32bits. They are all SNI compliant, I've tested them this afternoon https://sni.velox.ch/ You're right Shane Madden, I have multiple sites on the same public IP (and same port as well). The server itself is just a reverse proxy, that rewrite addresses to internal servers. The default host is a dev site, configure with the certificate that does not match the OWA (of course... would have been to easy) <VirtualHost *:443> ServerName dev2.domain.com ServerAdmin [email protected] CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/access-%y%m%d.log 86400" combined ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/error-%y%m%d.log 86400" LogLevel warn RewriteEngine on SetEnvIfNoCase X-Forwarded-For .+ proxy=yes SSLEngine on SSLProtocol -all +SSLv3 +TLSv1 SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL:+SSLv3 SSLCertificateFile /etc/httpd/ssl/domain.com.crt SSLCertificateKeyFile /etc/httpd/ssl/domain.com.key RewriteCond %{HTTP_HOST} dev2\.domain\.com RewriteRule ^/(.*)$ http://dev2.domain.com/$1 [L,P] </VirtualHost> The certificate of domain is a *.domain.com The second vHost is : <VirtualHost *:443> ServerName exchange.domain2.com ServerAdmin [email protected] CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/exchange/access-%y%m%d.log 86400" combined ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/exchange/error-%y%m%d.log 86400" LogLevel warn SSLEngine on SSLProxyEngine On SSLProtocol -all +SSLv3 +TLSv1 SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL:+SSLv3 SSLCertificateFile /etc/httpd/ssl/exchange.pem SSLCertificateKeyFile /etc/httpd/ssl/exchange.key RewriteEngine on SetEnvIfNoCase X-Forwarded-For .+ proxy=yes RewriteCond %{HTTP_HOST} exchange\.domain2\.com RewriteRule ^/(.*)$ https://exchange.domain2.com/$1 [L,P] </VirtualHost> and it's certificate is exchange.domain2.com only. I presume the SNI is somewhere not activated on my server. The versions of openssl and apache seams to be ok for the SNI support. The only thing I do not know is if httpd has been compile with the good options. (I assume it's a fedora packet).
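    A quick way to see which certificate is served with and without SNI (and therefore whether the front end is honouring the requested server name at all) is openssl s_client; the hostname below is a placeholder for the proxy's public address:

    ```sh
    # Without -servername: what a non-SNI client gets (the default vhost's cert).
    openssl s_client -connect proxy.example.com:443 </dev/null 2>/dev/null \
        | openssl x509 -noout -subject

    # With -servername: what an SNI-capable client asking for OWA should get.
    openssl s_client -connect proxy.example.com:443 \
        -servername exchange.domain2.com </dev/null 2>/dev/null \
        | openssl x509 -noout -subject
    ```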

  • ovs-vsctl: "eth0" is not a valid UUID

    - by Przemek Lach
    I'm trying to setup an open v-switch inside my Ubuntu 12.04 Server VM. I have created three interfaces for this VM and I want to create a port mirror inside of the VM using these there interfaces and open v-switch. There are three Host-Only Adapters: eth0, eth1, eth2. The idea is that three other VM's will be connected to these adapters. One of these VM's will stream UDP video to eth0 and I want the vswitch'd VM to mirror those packets from eth0 onto eth1 and eth2. Each of the VM's connected to eth1 and eth2 will get the same video stream. I performed the following steps to install open v-switch: $ apt-get install python-simplejson python-qt4 python-twisted-conch automake autoconf gcc uml-utilities libtool build-essential $ apt-get install build-essential autoconf automake pkg-config $ wget http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ tar xf http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ cd http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ apt-get install libssl-dev iproute tcpdump linux-headers-`uname -r` $ ./boot.sh $ ./configure - -with-linux=/lib/modules/`uname -r`/build $ make $ sudo make install After installation I configured as follows: $ insmod datapath/linux/openvswitch.ko $ sudo touch /usr/local/etc/ovs-vswitchd.conf $ mkdir -p /usr/local/etc/openvswitch $ ovsdb-tool create /usr/local/etc/openvswitch/conf.db Then I started the server: $ ovsdb-server /usr/local/etc/openvswitch/conf.db \ --remote=punix:/usr/local/var/run/openvswitch/db.sock \ --remote=db:Open_vSwitch,manager_options \ --private-key=db:SSL,private_key \ --certificate=db:SSL,certificate \ --bootstrap-ca-cert=db:SSL,ca_cert --pidfile --detach --log-file $ ovs-vsctl –no-wait init (run only once) $ ovs-vswitchd --pidfile --detach The above steps I got from this tutorial and it all worked fine. I then proceeded to add a port mirror based on the open v-switch documentation under Port Mirroring. 
I successfully completed the following commands: $ ovs-vsctl add-br br0 $ ovs-vsctl add-port br0 eth0 $ ovs-vsctl add-port br0 eth1 $ ovs-vsctl add-port br0 eth2 $ ifconfig eth0 promisc up $ ifconfig eth1 promisc up $ ifconfig eth2 promisc up At this point when I run ovs-vsctl show I get the following: 75bda8c2-b870-438b-9115-e36288ea1cd8 Bridge "br0" Port "br0" Interface "br0" type: internal Port "eth0" Interface "eth0" Port "eth2" Interface "eth2" Port "eth1" Interface "eth1" And when I run ifconfig I get the following: eth0 Link encap:Ethernet HWaddr 08:00:27:9f:51:ca inet6 addr: fe80::a00:27ff:fe9f:51ca/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth1 Link encap:Ethernet HWaddr 08:00:27:53:02:d4 inet6 addr: fe80::a00:27ff:fe53:2d4/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth2 Link encap:Ethernet HWaddr 08:00:27:cb:a5:93 inet6 addr: fe80::a00:27ff:fecb:a593/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth3 Link encap:Ethernet HWaddr 08:00:27:df:bb:d8 inet addr:192.168.1.139 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fedf:bbd8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2211 errors:0 dropped:0 overruns:0 frame:0 TX packets:1196 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:182987 (182.9 KB) TX bytes:125441 (125.4 KB) NOTE: I use eth3 as a bridge adapter for SSH'ing into the VM. So now, I think I've done everything correctly but when I try to create the bridge using the following command: $ ovs-vsctl -- set Bridge br0 mirrors=@m -- --id=@eth0 get Port eth0 -- --id=@eth1 get Port eth1 -- --id=@m create Mirror name=app1Mirror select-dst-port=eth0 select-src-port=@eth0 output-port=@eth1,eth2 I get the following error: ovs-vsctl: "eth0" is not a valid UUID I don't understand why it's not able to find the interfaces?
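    The complaint is almost certainly about select-dst-port=eth0: in that position ovs-vsctl expects a record reference, so it has to be the @eth0 id declared earlier on the same command line, and every port named in the mirror needs its own --id section. A corrected sketch, mirroring eth0 to eth1 only (a Mirror record takes a single output port; feeding eth2 as well would need a second Mirror added to the bridge's mirrors column):

    ```sh
    ovs-vsctl -- set Bridge br0 mirrors=@m \
        -- --id=@eth0 get Port eth0 \
        -- --id=@eth1 get Port eth1 \
        -- --id=@m create Mirror name=app1Mirror \
             select-dst-port=@eth0 select-src-port=@eth0 output-port=@eth1
    ```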

  • Can't communicate between LAN ports on OpenWrt router

    - by ScaryAardvark
    I've got a WBMR-HP-G300H Buffalo Airstation router on which I've installed the lates OpenWRT software. All is working well (ADSL, WIFI etc) except for one niggle. I can't communicate between lan ports. i.e. if I have one computer connected on lan port 1 and I try to ping another computer on lan port 2 then I get "destination unreachable". I can ping both computers from the router itself and can also ping each computer from a seperate laptop connected wirelessly. All computers are in the same subnet range (10.0.0.?/24). I suspect that I may need to configure a vlan on the switch but everytime I try and do this with various google'ed configuration I keep freezing out all lan-ports and I have to revert back using a wirelessly connected laptop. Here's my /etc/config/network: config interface 'loopback' option ifname 'lo' option proto 'static' option ipaddr '127.0.0.1' option netmask '255.0.0.0' config interface 'lan' option type 'bridge' option proto 'static' option netmask '255.255.255.0' option ipaddr '10.0.0.1' option _orig_ifname 'eth0 wlan0' option _orig_bridge 'true' option ifname 'eth0' config adsl-device 'adsl' option fwannex 'a' option annex 'a2p' config interface 'wan' option _orig_ifname 'nas0' option _orig_bridge 'false' option proto 'pppoa' option encaps 'vc' option atmdev '0' option vci '38' option vpi '0' option username '?????????????' option password '??????????????' Any help would be warmly received. Here's some more config stuff. root@OpenWrt:~# ifconfig -a br-lan Link encap:Ethernet HWaddr 00:24:A5:BD:66:08 inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:226576 errors:0 dropped:346 overruns:0 frame:0 TX packets:269292 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:26771676 (25.5 MiB) TX bytes:183986450 (175.4 MiB) eth0 Link encap:Ethernet HWaddr 00:24:A5:BD:66:08 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ifb0 Link encap:Ethernet HWaddr 36:60:EC:DF:13:A1 BROADCAST NOARP MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ifb1 Link encap:Ethernet HWaddr 4A:7B:75:67:54:E0 BROADCAST NOARP MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:780 errors:0 dropped:0 overruns:0 frame:0 TX packets:780 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:58369 (57.0 KiB) TX bytes:58369 (57.0 KiB) mon.wlan0 Link encap:UNSPEC HWaddr 00-24-A5-BD-66-08-00-48-00-00-00-00-00-00-00-00 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2424 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:320188 (312.6 KiB) TX bytes:0 (0.0 B) pppoa-wan Link encap:Point-to-Point Protocol inet addr:81.136.179.204 P-t-P:81.134.80.1 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:258894 errors:0 dropped:0 overruns:0 frame:0 TX packets:212976 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX 
bytes:177341656 (169.1 MiB) TX bytes:25192459 (24.0 MiB) wlan0 Link encap:Ethernet HWaddr 00:24:A5:BD:66:08 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:204063 errors:0 dropped:0 overruns:0 frame:0 TX packets:245516 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:26613140 (25.3 MiB) TX bytes:162799765 (155.2 MiB) root@OpenWrt:~# brctl show bridge name bridge id STP enabled interfaces br-lan 8000.0024a5bd6608 no wlan0 eth0 root@OpenWrt:~# swconfig dev eth0 show Global attributes: enable_vlan: 0 Port 0: pvid: 0 link: port:0 link:up speed:1000baseT full-duplex txflow rxflow Port 1: pvid: 0 link: port:1 link:down Port 2: pvid: 0 link: port:2 link:down Port 3: pvid: 0 link: port:3 link:down Port 4: pvid: 0 link: port:4 link:up speed:100baseT full-duplex txflow rxflow auto Port 5: pvid: 0 link: port:5 link:up speed:100baseT full-duplex txflow rxflow auto Regards Mark.
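    If it turns out the switch needs VLAN tagging enabled before it will forward between LAN ports, the usual shape of the UCI switch section is sketched below; the switch name, the port numbers and which port is the tagged CPU port are guesses for this board, so check "swconfig dev eth0 help" first and adjust (the lan interface would then use eth0.1 instead of eth0):

    ```
    config switch
    	option name 'eth0'
    	option reset '1'
    	option enable_vlan '1'

    config switch_vlan
    	option device 'eth0'
    	option vlan '1'
    	# port list is a guess: LAN ports plus the CPU port tagged
    	option ports '1 2 3 4 0t'
    ```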

  • Is this a valid JFS partition?

    - by Coolmax
    This is my first question on StackExchange. My teacher gave my his laptop (with Fedora 16 on it) and compact flash card with data. He want to have access to files on card, but he couldn't get access to it. The problem is Linux don't know what type of partion is. I suppose there is JFS: root@debian:~# dmesg |grep sdc [ 9066.908223] sd 3:0:0:1: [sdc] 3940272 512-byte logical blocks: (2.01 GB/1.87 GiB) [ 9066.962307] sd 3:0:0:1: [sdc] Write Protect is off [ 9066.962310] sd 3:0:0:1: [sdc] Mode Sense: 03 00 00 00 [ 9066.962312] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.028420] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.028637] sdc: unknown partition table [ 9067.097065] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.097281] sd 3:0:0:1: [sdc] Attached SCSI removable disk and some of data: root@debian:~# hexdump -Cn 65536 /dev/sdc 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00008000 4a 46 53 31 01 00 00 00 48 63 0e 00 00 00 00 00 |JFS1....Hc......| 00008010 00 10 00 00 0c 00 03 00 00 02 00 00 09 00 00 00 |................| 00008020 00 20 00 00 00 09 20 10 02 00 00 00 00 00 00 00 |. .... .........| 00008030 04 00 00 00 26 00 00 00 02 00 00 00 24 00 00 00 |....&.......$...| 00008040 41 03 00 00 16 00 00 00 00 02 00 00 a0 cc 01 00 |A...............| 00008050 37 00 00 00 69 cc 01 00 b6 d8 ac 4b 00 00 00 00 |7...i......K....| 00008060 32 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 |2...............| 00008070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00008080 00 00 00 00 00 00 00 00 90 15 e5 5f e3 c4 45 fa |..........._..E.| 00008090 9d 6a 5c b5 4f da 62 1a 00 00 00 00 00 00 00 00 |.j\.O.b.........| 000080a0 00 00 00 00 00 00 00 00 c3 c9 01 00 ed 81 00 00 |................| 000080b0 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000080c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * [cut] * 0000f000 4a 46 53 31 01 00 00 00 48 63 0e 00 00 00 00 00 |JFS1....Hc......| 0000f010 00 10 00 00 0c 00 03 00 00 02 00 00 09 00 00 00 |................| 0000f020 00 20 00 00 00 09 20 10 00 00 00 00 00 00 00 00 |. .... .........| 0000f030 04 00 00 00 26 00 00 00 02 00 00 00 24 00 00 00 |....&.......$...| 0000f040 41 03 00 00 16 00 00 00 00 02 00 00 a0 cc 01 00 |A...............| 0000f050 37 00 00 00 69 cc 01 00 b6 d8 ac 4b 00 00 00 00 |7...i......K....| 0000f060 32 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 |2...............| 0000f070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 0000f080 00 00 00 00 00 00 00 00 90 15 e5 5f e3 c4 45 fa |..........._..E.| 0000f090 9d 6a 5c b5 4f da 62 1a 00 00 00 00 00 00 00 00 |.j\.O.b.........| 0000f0a0 00 00 00 00 00 00 00 00 c3 c9 01 00 ed 81 00 00 |................| 0000f0b0 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 0000f0c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00010000 I'm total newbie to filesystems. I googled and found that JFS superblock may starts on 0x8000 offset. But what next? How to mount this card? If there would be normal partition table I would expect 55 AA on 510th and 511th byte, but first 8000 bytes are clear. Any help would be greatly appreciated. And sorry for my bad english :) Kind regards.
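    A JFS superblock normally lives 32 KiB (0x8000) into the filesystem itself, so finding "JFS1" at offset 0x8000 of the raw device suggests the card has no partition table and the filesystem starts at sector 0. In that case it should mount straight from the whole-device node (read-only first, to be safe):

    ```sh
    mkdir -p /mnt/cf
    mount -t jfs -o ro /dev/sdc /mnt/cf

    # If mount refuses, fsck.jfs from the jfsutils package can sanity-check
    # the superblock without writing anything:
    fsck.jfs -n /dev/sdc
    ```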

  • MySQL crash. Unknown cause. Signal 11

    - by fortmac
    This is a database that I installed ~6 months ago and had been running fine. This is currently running in Ubuntu 12.04. Attempting to connect to MySQL causes this error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) Then theres: $ sudo mysqld which returns: 130702 15:38:54 [Note] Plugin 'FEDERATED' is disabled. 130702 15:38:54 InnoDB: The InnoDB memory heap is disabled 130702 15:38:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130702 15:38:54 InnoDB: Compressed tables use zlib 1.2.3.4 130702 15:38:54 InnoDB: Initializing buffer pool, size = 128.0M 130702 15:38:54 InnoDB: Completed initialization of buffer pool 130702 15:38:54 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130702 15:38:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130702 15:38:55 InnoDB: Waiting for the background threads to start 130702 15:38:56 InnoDB: 1.1.8 started; log sequence number 5201901917 130702 15:38:56 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130702 15:38:56 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130702 15:38:56 [Note] Server socket created on IP: '127.0.0.1'. 130702 15:38:56 [Note] Event Scheduler: Loaded 0 events 130702 15:38:56 [Note] mysqld: ready for connections. Version: '5.5.28-0ubuntu0.12.04.3' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 19:39:02 UTC - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f9509e51530 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f94f1d3de60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f95083427b9] mysqld(handle_fatal_signal+0x483)[0x7f9508209b43] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9506f5bcb0] mysqld(+0x320e1c)[0x7f9508113e1c] mysqld(_ZN4JOIN15alloc_func_listEv+0x9c)[0x7f950812391c] mysqld(_ZN4JOIN7prepareEPPP4ItemP10TABLE_LISTjS1_jP8st_orderS7_S1_S7_P13st_select_lexP18st_select_lex_unit+0x918)[0x7f9508124658] mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x130)[0x7f950812d060] mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f9508132fbc] mysqld(+0x2f6714)[0x7f95080e9714] mysqld(_Z21mysql_execute_commandP3THD+0x16d8)[0x7f95080f1178] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f95080f5e0f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f95080f7260] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f950819b80d] mysqld(handle_one_connection+0x50)[0x7f950819b870] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9506f53e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9506684cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f94e0004b80): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. I'm at a loss. What other reports would be useful in diagnosing this? /var/log/mysql.err & /var/log/mysql.log are empty.
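    If the crash is reproducible, the usual first move is to bring the server up in InnoDB forced-recovery mode and take a dump before digging further; a sketch - start at level 1 and only raise it if the server still will not stay up:

    ```sh
    # 1) Add to the [mysqld] section of /etc/mysql/my.cnf (remove it again later):
    #      innodb_force_recovery = 1
    # 2) Start the server and immediately dump everything:
    sudo service mysql start
    mysqldump -uroot -p --all-databases > rescue.sql
    ```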

  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I’ve shown you our SuperForm test application solution structure and how the main wxs and wxi include file look like. In this post I’ll show you how to automate inclusion of files to install into your build process. For our SuperForm application we have a single exe to install. But in the real world we have 10s or 100s of different files from dll’s to resource files like pictures. It all depends on what kind of application you’re building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe. Heat is a command line utility to harvest a file, directory, Visual Studio project, IIS website or performance counters. You might ask what harvesting means? Harvesting is converting a source (file, directory, …) into a component structure saved in a WiX fragment (a wxs) file. There are 2 options you can use: Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor. Run heat command line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don’t have to worry about missing any new files you add. The con of this is that you’ll include files that you otherwise might not want to. There is no perfect solution so pick one and deal with it. I prefer using the second way. A neat way of overcoming the con of the second option is to have a post-build event on your main application project (SuperForm.MainApp in our case) to copy the files needed to be installed in a special location and have the Heat.exe read them from there. I haven’t set this up for this tutorial and I’m simply including all files from the default SuperForm.MainApp \bin directory. Remember how we created a System Environment variable called SuperFormFilesDir? This is where we’ll use it for the first time. The command line text that you have to put into the pre-build event of your WiX project looks like this: "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs" After you install WiX you’ll get the WIX environment variable. In the pre/post-build events environment variables are referenced like this: $(WIX). By using this you don’t have to think about the installation path of the WiX. Remember: for 32 bit applications Program files folder is named differently between 32 and 64 bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built in variable. You can view all Heat.exe options by running it without parameters but I’ll explain some that stick out the most. dir "$(SuperFormFilesDir)": tell Heat to harvest the whole directory at the set location. That is the location we’ve set in our System Environment variable. –cg SuperFormFiles: the name of the Component group that will be created. This name is included in out Feature tag as is seen in the previous post. -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top level directory structure in the previous post. -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file. 
-out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved. If you have source control you have to include the FilesFragment.wxs into your project but remove its source control binding. The auto generated FilesFragment.wxs for our test app looks like this: <?xml version="1.0" encoding="utf-8"?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Fragment> <ComponentGroup Id="SuperFormFiles"> <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" /> <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" /> <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" /> <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" /> </ComponentGroup> </Fragment> <Fragment> <DirectoryRef Id="INSTALLLOCATION"> <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}"> <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" /> </Component> <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}"> <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" /> </Component> <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}"> <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" /> </Component> <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}"> <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" /> </Component> </DirectoryRef> </Fragment></Wix> The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file but this is the one I prefer. It just seems the clearest. Play with its options to see what can it do. It’s one awesome little tool.   WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe
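A note on the post-build staging idea mentioned above: the copy step in the SuperForm.MainApp post-build event can be a plain xcopy. This is only a sketch — the switches and the choice of SuperFormFilesDir as the staging target are assumptions, not part of the original setup:

    xcopy "$(TargetDir)*.dll" "$(SuperFormFilesDir)\" /Y /I
    xcopy "$(TargetDir)$(TargetFileName)" "$(SuperFormFilesDir)\" /Y

With a staging step like this in place, Heat harvests only the binaries you actually want to ship instead of everything that happens to be in \bin.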

    Read the article

  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
    If you're running on IIS 7 and a 64 bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as: ActiveX component can't create object   (Error 429) (actually without error handling the error just shows up as 500 error page) In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads to some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this: <% Set banner = Server.CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> Originally this code had no specific error checking as above so the ASP pages just failed with 500 error pages from the Web server. To find out what the problem is this code is more useful at least for debugging: <% ON ERROR RESUME NEXT Set banner = Server.CreateObject("wwBanner.aspBanner") Response.Write(err.Number & " - " & err.Description) banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> which results in: 429 - ActiveX component can't create object which at least gives you a slight clue. In ASP.NET invoking the same COM object with code like this: <% dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic; banner.cBANNERFILE = "wwsitebanners"; Response.Write(banner.getBanner(-1)); %> results in: Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). The class is in fact registered though and the COM server loads fine from a command prompt or other COM client. This error can be caused by a COM server that doesn't load. It looks like a COM registration error. There are a number of traditional reasons why this error can crop up of course. The server isn't registered (run regserver32 to register a DLL server or /regserver on an EXE server) Access permissions aren't set on the COM server (Web account has to be able to read the DLL ie. Network service) The COM server fails to load during initialization ie. failing during startup One thing I always do to check for COM errors fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with: loBanners = CREATEOBJECT("wwBanner.aspBanner") loBanners.cBannerFile = "wwsitebanners" ? loBanners.GetBanner(-1) and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript do the same thing and run the .vbs file from the command prompt: Set banner = Server.CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" MsgBox(banner.getBanner(-1)) Since this both works it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double checked permissions for the Application Pool and the permissions of the folder where the DLL lives and both are properly set to allow access by the Application Pool impersonated user. Just to be sure I assigned an Admin user to the Application Pool but still no go. So now what? 64 bit Servers Ahoy A couple of weeks back I had set up a few of my Application pools to 64 bit mode. 
My server is Server 2008 64 bit and by default Application Pools run 64 bit. Originally when I installed the server I set up most of my Application Pools to 32 bit mainly for backwards compatibility. But as more of my code migrates to 64 bit OS's I figured it'd be a good idea to see how well code runs under 64 bit code. The transition has been mostly painless. Until today when I noticed the problem with the code above when scrolling to my IIS logs and noticing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object. It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode. The issue is that 32 bit COM objects (ie. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode but I didn't find the problem until I searched my logs for errors by pure chance. To fix this is easy enough once you know what the problem is by switching the Application Pool to Enable 32-bit Applications: Once this is done the COM objects started working correctly again. 64 bit ASP and ASP.NET with DCOM Servers This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit application pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well as the DCOM interface only makes one non-chatty call to the backend server that handles all the rest of the request processing. Application Pool Isolation is your Friend For me the recent incident of failure in the classic ASP pages has just been another reminder to be very careful with moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32bit instead of 64bit) which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically 3 different entries for the same ISAPI filter all with different bitness settings). It took me almost a full day to track this down). Recently I've been taken to isolate individual applications into separate Application Pools rather than my past practice of combining many apps into shared AppPools. This is a good practice assuming you have enough memory to make this work. Application Pool isolate provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget. To 64bit or Not Is it worth it to move to 64 bit? Currently I'd say -not really. In my - admittedly limited - testing I don't see any significant performance increases. 
In fact 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average) and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on benefits of 64 bit operation. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in COM, ASP.NET, FoxPro
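For completeness, the "Enable 32-bit Applications" switch described above can also be flipped from the command line instead of the IIS Manager UI. A minimal sketch — substitute your own Application Pool name:

    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true

After changing the setting, recycle the pool so the worker process restarts in 32-bit mode and can load the 32-bit COM servers again.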

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the following NuGet command: The New jQuery Extender Base Class This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx Why are we rewriting Ajax Control Toolkits with jQuery? There are very few developers actively working with the Microsoft Ajax Library while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features. Instantiating Controls with data-* Attributes We took advantage of the new JQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" /> <script type="text/javascript"> //<![CDATA[ Sys.Application.add_init(function() { $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check", "CheckedImageUrl":"ToggleButton_Checked.gif", "ImageHeight":19, "ImageWidth":19, "UncheckedImageAlternateText":"UnCheck", "UncheckedImageUrl":"ToggleButton_Unchecked.gif", "id":"ctl00_SampleContent_ToggleButtonExtender1"}, null, null, $get("ctl00_SampleContent_CheckBox1")); }); //]]> </script> Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible! 
The jQuery version of the ToggleButton injects the following HTML and script into the page: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" data-act-togglebuttonextender="imageWidth:19, imageHeight:19, uncheckedImageUrl:'ToggleButton_Unchecked.gif', checkedImageUrl:'ToggleButton_Checked.gif', uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET', checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" /> Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (You don’t need to feel embarrassed when selecting View Source in your browser). Ajax Control Toolkit versus jQuery So in a jQuery world why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world: Ajax Control Toolkit controls run on both the server and client jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server – think of the AjaxFileUpload control – then you can’t use a pure jQuery solution. Ajax Control Toolkit controls provide a better Visual Studio experience You don’t get any design time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties. Ajax Control Toolkit controls shield you from working with JavaScript I like writing code in JavaScript. However, not all developers like JavaScript and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript. Better ToolkitScriptManager Documentation With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and it can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx Other Fixes This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API. 
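To make the data-* instantiation above a little more concrete, here is a conceptual jQuery sketch — not the toolkit's actual source — of how an extender can locate and read its settings from the markup:

    <script type="text/javascript">
        // conceptual sketch only: find every checkbox that declares the ToggleButton
        // extender via its data-act-* attribute and read the settings string
        $('input[data-act-togglebuttonextender]').each(function () {
            var settings = $(this).attr('data-act-togglebuttonextender');
            // 'settings' holds the comma-separated name:value pairs from the markup;
            // the real extender parses these and wires up the checked/unchecked images
            window.console && console.log(settings);
        });
    </script>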
To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx   Summary We’ve made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.
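As a quick reference, installing the toolkit from the Package Manager Console is a one-liner — assuming the standard AjaxControlToolkit package ID referred to at the top of the post:

    Install-Package AjaxControlToolkit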

    Read the article

  • Tips on installing Visual Studio 2010 SP1

    - by Jon Galloway
    Visual Studio SP1 went up on MSDN downloads (here) on March 8, and will be released publicly on March 10 here. Release announcements: Soma: Visual Studio 2010 enhancements Jason Zander: Announcing Visual Studio 2010 Service Pack 1 I started on this post with tips on installing VS2010 SP1 when I realized I’ve been writing these up for Visual Studio and .NET framework SP releases for a while (e.g. VS2008 / .NET 3.5 SP1 post, VS2005 SP1 post). Looking back the years of Visual Studio SP installs (and remembering when we’d get up to SP6 for a Visual Studio release), I’m happy to see that it just keeps getting easier. Service Packs are a lot less finicky about requiring beta software to be uninstalled, install more quickly, and are just generally a lot less scary. If I can’t have a jetpack, at least my future provided me faster, easier service packs. Disclaimer: These tips are just general things I've picked up over the years. I don't have any inside knowledge here. If you see anything wrong, be sure to let me know in the comments. You may want to check the readme file before installing - it's short, and it's in that new-fangled HTML format. On with the tips! Before starting, uninstall Visual Studio features you don't use Visual Studio service packs (and other Microsoft service packs as well) install patches for the specific features you’ve got installed. This is a big reason to always do a custom install when you first install Visual Studio, but it’s not difficult to update your existing installation. Here’s the quick way to do that: Tap the windows key and type “add or remove programs” and press enter (or click on the “Add or remove programs” link if you must).   Type “Visual Studio 2010” in the search box in the upper right corner, click on the Visual Studio program (the one with the VS infinity looking logo) and click on Uninstall/Change. Click on Add or Remove Features The next part’s up to you – what features do you actually use? I’ve been doing primarily ASP.NET MVC development in C# lately, so I selected Visual C# and Visual Web Developer. Remember that you can install features later if needed, and can also install the express versions if you want. Selecting everything just because it’s there - or you paid for it – means that you install updates for everything, every time. When you’ve made your changes, click on the Update button to uninstall unused features. Shut down all instances of Visual Studio It probably goes without saying that you should close a program down before installing it, partly to avoid the file-in-use-reboot-after-install horror. Additional "hunch / works on my machine" quality tip: On one computer I saw a note in the setup log about Visual Studio a prompt for user input to close Visual Studio, although I never saw the prompt. Just to  be sure, I'd personally open up Task Manager and kill any devenv.exe processes I saw running, as it couldn't hurt. Use the web installer I use the Web Installers whenever possible. There’s no point in downloading the DVD unless you’re doing multiple installs or won’t have internet access. The DVD IS is 1.5GB, since it needs to be able to service every possible supported installation option on both x86 and x64. The web installer is 776 KB (smaller than calc.exe), so you can start the installation right away. Like other web installers, the real benefit is that it only installs the updates you need (hence the reason for step 1 – uninstalling unused components). Instead of 1.5GB, my download was roughly 530MB. 
If you’re installing from MSDN (this link takes you right to the Visual Studio installs), select the first one on the list: The first step in the installation process is to analyze the machine configuration and tell you what needs to be installed. Since I've trimmed down my features, that's a pretty short list. The time's not far off where I may not install SQL Server on my dev machines, just using SQL Server Compact - that would shorten the list further. When I hit next, you can see that the download size has shrunk considerably. When I start the install, note that the installation begins while other components are downloading - another benefit of the web install. On my mid-range desktop machine, the install took 25 minutes. What if it takes longer? According to Heath Stewart (Visual Studio installer guru), average SP1 installs take roughly 45 minutes. An installation which takes hours to complete may be a sign of a problem: see his post Visual Studio 2010 Service Pack 1 installing for over 2 hours could be a sign of a problem. Why so long? Yes, even 25 minutes is a while. Heath's got another blog post explaining why the update can take longer than the initial install (see: A patch may take as long or longer to install than the target product) which explains all the additional steps and complexities a patch needs to deal with, as well as some mitigation steps that deployment authors can take to mitigate the impact. Other things to know about Visual Studio 2010 SP1 Installs over Visual Studio 2010 SP1 Beta That's nice. Previous Visual Studio versions did a number of annoying things when you installed SP's over beta's - fail with weird errors, get part way through and tell you needed to cancel and uninstall first, etc. I've installed this on two machines that had random beta stuff installed without tears. That Readme file you didn't read I mentioned the readme file earlier (http://go.microsoft.com/fwlink/?LinkId=210711 ). Some interesting things I picked up in there: 2.1.3. Visual Studio 2010 Service Pack 1 installation may fail when a USB drive or other removeable drive is connected 2.1.4. Visual Studio must be restarted after Visual Studio 2010 SP1 tooling for SQL Server Compact (Compact) 4.0 is installed 2.2.1. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled to restore certain components 2.2.2. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled before SP1 can be installed again 2.4.3.1. Async CTP If you installed the pre-SP1 version of Async CTP but did not uninstall it before you installed Visual Studio 2010 SP1, then your computer will be in a state in which the version of the C# compiler in the .NET Framework does not match the C# compiler in Visual Studio. To resolve this issue: After you install Visual Studio 2010 SP1, reinstall the SP1 version of the Async CTP from here. Hardware acceleration for Visual Studio is disabled on Windows XP Visual Studio 2010 SP1 disables hardware acceleration when running on Windows XP (only on XP). You can turn it back on in the Visual Studio options, under Environment / General, as shown below. See Jason Zander's post titled Performance Troubleshooting Article and VS2010 SP1 Change.
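One command-line footnote to the "shut down all instances of Visual Studio" tip above: instead of hunting through Task Manager, any stray Visual Studio processes can be force-closed from a command prompt (use with care — unsaved work is lost):

    taskkill /F /IM devenv.exe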

    Read the article

  • Why can't I reinstall MySQL?

    - by Johannes Nielsen
    I've been looking all around the Internet for an answer but didn't find anything. I hope you can help me now. I have a server with MySQL. From one day to another, MySQL didn't let me enter with my root password anymore (accsess denied for user 'root'@'localhost' using password: 'YES'). So I tried two ways to reset the password: No.1: I typed: shell> /etc/init.d/mysqld stop To stop MySQL. Then I restarted it skipping the grant-tables: shell> mysqld_safe --skip-grant-tables So I was able to log in as root and change the password using: mysql> UPDATE mysql.user SET Password = PASSWORD('MyNewPassword') WHERE User = 'root'; FLUSH PRIVILEGES; I restarted MySQL and tried to log in as root with my new password - didn't work. So I tried the solution that's described here: http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html (I don't want to post it here because this post is already pretty long). Didn't work either. Actually it made it worse, because since that day, every time I try to start MySQL, it doesn't even ask me for my password, but I get: shell> ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) Well, I've looked up what it means and found that my mysqld.sock is missing. I tried to create it using touch but MySQL can't start with that socket. Now I'm trying to reinstall MySQL but everytime I type in shell> apt-get --purge remove mysql-server mysql-common mysql-client In that or any other order or every one of those three alone, I get: shell> Reading package lists... Done shell> Building dependency tree shell> Reading state information... Done shell> Package mysql-client is not installed, so not removed shell> Package mysql-server is not installed, so not removed shell> You might want to run 'apt-get -f install' to correct these: shell> The following packages have unmet dependencies: shell> libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2) shell> mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> psa-firewall : Depends: plesk-core (>= 11.0.9) but it is not installable shell> Depends: mysql-server but it is not going to be installed shell> psa-spamassassin : Depends: plesk-core (>= 11.0.9) but it is not installable shell> psa-vpn : Depends: plesk-core (>= 11.0.9) but it is not installable shell> Depends: plesk-base (>= 11.0.9) but it is not installable shell> Depends: mysql-server but it is not going to be installed shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). So I said to my self "let's just remove those files with depenencies, too" (that psa-stuff since plesk is virtual and can't be uninstalled)... Guess what happened: shell> Reading package lists... Done shell> Building dependency tree shell> Reading state information... 
Done shell> Package mysql-client is not installed, so not removed shell> Package mysql-server is not installed, so not removed shell> You might want to run 'apt-get -f install' to correct these: shell> The following packages have unmet dependencies: shell> libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2) shell> mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). Of course I tried apt-get -f install, too many times even. What am I doing wrong? No matter, which other packages I include into apt-get --purge remove, I always get new dependencies. Do I have to delete every MySQL-related directory and file manually? Hope there's someone out there who can help me! Cheers! EDIT: After trying apt-get purge mysql-server mysql-common mysql-client libmysqlclient18 libmysqlclient18:i386 mysql-client-5.5 mysql-server-5.5 psa-firewall psa-spamassassin psa-vpn Reading package lists... Done Building dependency tree Reading state information... Done Package mysql-client is not installed, so not removed Package mysql-server is not installed, so not removed You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libdbd-mysql-perl : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed libmyodbc : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libmysqlclient18:i386 (>= 5.5.13-1) but it is not going to be installed php5-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed ruby-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). So I tried to remove all these and got: Building dependency tree Reading state information... Done Package mysql-client is not installed, so not removed Package mysql-server is not installed, so not removed You might want to run 'apt-get -f install' to correct these:qlclient18:i386 mysql The following packages have unmet dependencies: libmysql-ruby1.8 : Depends: ruby-mysql but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). And actually I think removing that file, too solved my problem :-S Next time I'll try everything before asking :D Thank you Eric for keeping me couraged to just go on removing :D
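For what it's worth, the sequence the question eventually converges on can be summarized as one hedged sketch — this is only a recap of the commands above, with the package names taken from the dependency output, not an authoritative fix:

    sudo apt-get purge mysql-server mysql-server-5.5 mysql-client mysql-client-5.5 mysql-common libmysqlclient18
    sudo apt-get -f install
    sudo apt-get install mysql-server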

    Read the article

  • Top Tweets SOA Partner Community – March 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity SOA Community ?SOA Community Newsletter February 2012 wp.me/p10C8u-o0 Marc ?Reading through the #OFM 11.1.1.6 , patchset 5 documentation. What is the best way to upgrade your whole dev…prd street. SOA Community Thanks for the successful and super interesting #sbidays ! Wonderful discussions around the Integration, case management and security tracks Torsten Winterberg Schon den neuen Opitz Technology-Blog gebookmarked? The Cattle Crew bit.ly/yLPwBD wird ab sofort regelmäßig Erkenntnisse posten. OTNArchBeat ? Unit Testing Asynchronous BPEL Processes Using soapUI | @DanielAmadei bit.ly/x9NsS9 Rolando Carrasco ?Video de Human Task en BPM 11g. Por @edwardo040. bit.ly/wki9CA cc @OracleBPM @OracleSOA @soacommunity View video Marcel Mertin SOA Security Hands-On by Dirk Krafzig and Mamoon Yunus at #sbidays is also great! SOA Community Workshop day #sbidays #BPMN2.0 by Volker Stiehl from #SAP great training – now I can model & execute in #bpmsuite #soacommunity Simone Geib ?Just updated our advanced #soasuite #otn page with a number of very interesting @orclateamsoa blog posts: bit.ly/advancedsoasui… OTNArchBeat ? Start Small, Grow Fast: SOA Best Practices article by @biemond, @rluttikhuizen, @demed bit.ly/yem9Zv Steffen Miller ? Nice new features in SOA Suite Business Rules #PS5 Testing rules with scenarios and output validation bit.ly/zj64Q3 @SOACOMMUNITY OTNArchBeat ? Reply SOA Blackbelt training by David Shaffer, April 30th–May 4th 2012 bit.ly/xGdC24 OTNArchBeat ? What have BPM, big data, social tools, and business models got in common? | Andy Mulholland bit.ly/xUkOGf SOA Community ? Live hacking at #sbidays – cheaper shopping, bias cracking, payment systems, secure your SOA! pic.twitter.com/y7YaIdug SOA Community Future #BPM & #ACM solutions can make use of ontology’s, based on #sqarql #sbidays pic.twitter.com/xLb1Z5zs Simone Geib ? @soacommunity: SOA Blackbelt training by David Shaffer, April 30th–May 4th 2012 wp.me/p10C8u-nX Biemond Changing your ADF Connections in Enterprise Manager with PS5: With Patch Set 5 of Fusion Middleware you can fina… bit.ly/zF7Rb1 Marc ? HUGE (!) CPU and Heap improvement on Oracle Fusion Middleware tinyurl.com/762drzp @wlscommunity @soacommunity #OSB #SOA #WLS SOA Community ?Networking @ SOA & BPM Partner Community blogs.oracle.com/soacommunity/e… #soacommunity #otn #opn #oracle SOA Community ?Published the SOA Partner Community newsletter February edition – READ it. Not yet a member? oracle.com/goto/emea/soa #soacommunity #otn #opn AMIS, Oracle & Java Blog by Lucas Jellema: "Book Review: Do More with SOA Integration: Best of Packt (december 2011, various authors)" bit.ly/wq633E Jon petter hjulstad @SOASimone Excellent summary! Lots of new features! Simone Geib ?Do you want to know what’s new in #soasuite #PS5? Go to bit.ly/xBX06f and let me know what you think SOA Community ? Unit Testing Asynchronous BPEL Processes Using soapUI oracle.com/technetwork/ar… #soacommunity #soa #otn #oracle #bpel Retweeted by SOA Community View media Retweeted by SOA Community Eric Elzinga ? Oracle Fusion Middleware Partner Community Forum Malage, The Overview, bit.ly/AA9BKd #ofmforum SOA&Cloud Symposium ? The February issue of the Service Technology Magazine is now published. servicetechmag.com SOA Community ? Oracle SOA Suite 11g Database Growth Management – must read! oracle.com/technetwork/da… #soacommunity #soa #purging demed ? 
Have you exposed internal processes to mobile devices using #oraclesoa? Interested in an article? DM me! #osb #rest #multichannel #mobile orclateamsoa ? A-Team SOA Blog: Enhanced version of Thread Dump Analyzer (TDA A-Team) ow.ly/1hpk7l SOA Community Reply BPM Suite #PS5 (11.1.1.6) available for download soacommunity.wordpress.com/2012/02/22/soa… Send us your feedback! #soacommunity #bpmsuite #opn SOA Community ? SOA Suite #PS5 (11.1.1.6) available for download soacommunity.wordpress.com/2012/02/22/soa… Send us your feedback! #soacommunity #soasuite SOA Community BPM Suite #PS5 1(1.1.1.6) available for download. List of new BPM features blogs.oracle.com/soacommunity/e… #soacommunity #bpm #bpmsuite #opn OracleBlogs BPM in Utilties Industry ow.ly/1hC3fp Retweeted by SOA Community OTNArchBeat ? Demystifying Oracle Enterprise Gateway | Naresh Persaud bit.ly/xtDNe2 OTNArchBeat ? Architect’s Guide to Big Data; Test BPEL Processes Using SoapUI; Development Debate bit.ly/xbDYSo Frank Nimphius ? Finished my book review of "Do More with SOA Integration: Best of Packt ". Here are my review comments: bit.ly/x2k9OZ Lucas Jellema ? That is my one stop-and-go download center for #PS5 : edelivery.oracle.com/EPD/Download/g… Lucas Jellema ? Interesting piece of documentation: Fusion Applications Extensibility Guide – docs.oracle.com/cd/E15586_01/f… source for design time @ run time inspira Lucas Jellema ? Strongly improved support for testing Business Rules at Design Time in #PS5 see docs.oracle.com/cd/E23943_01/u… Lucas Jellema ? SOA Suite 11gR1 PS5: new BPEL Component testing – docs.oracle.com/cd/E23943_01/d… Lucas Jellema ? PS5 available for CEP (Complex Event Processing) – a personal favorite of mine : oracle.com/technetwork/mi… Lucas Jellema ?What’s New in Fusion Developer’s Guide 11gR1 PS5: docs.oracle.com/cd/E23943_01/w… Lucas Jellema ? BPMN Correlation (FMW 11gR1 PS5): docs.oracle.com/cd/E23943_01/d… Lucas Jellema ? Modifying running BPM Process instances (FMW 11gR1 PS5): docs.oracle.com/cd/E23943_01/d… Lucas Jellema ? SOA Suite 11gR1 PS5 – new aggregation pattern: docs.oracle.com/cd/E23943_01/d… routing multiple messages to same instance Melvin van der Kuijl ? Automating Testing of SOA Composite Applications in PS5. docs.oracle.com/cd/E23943_01/d… Cato Aune ? SOA suite PS5 Enterprise Deployment Guide is available in ePub docs.oracle.com/cd/E23943_01/c… . Much better than pdf on Galaxy Note SOA Community ?JDeveloper 11.1.1.6 is available for download bit.ly/wGYrwE #soacommunity SOA Community ? Your first experience #PS5 – let us know @soacommunity – send us your tweets and blog posts! #soacommunity Jon petter hjulstad ? WLS 10.3.6 New features, ex better logging of jdbc use: docs.oracle.com/cd/E23943_01/w… Heidi Buelow ? Get it now! RT @soacommunity: BPM Suite PS5 11.1.1.6 available for download bit.ly/AgagT5 #bpm #soacommunity Jon petter hjulstad ?SOA Suite PS5 EDG contains OSB! docs.oracle.com/cd/E23943_01/c… Jon petter hjulstad ? Testing Oracle Rules from JDeveloper is easier in PS5: docs.oracle.com/cd/E23943_01/u… Biemond® ? What’s New in Oracle Service Bus 11.1.1.6.0 oracle.com/technetwork/mi… Jon petter hjulstad ? Adminguide New and Changed Features for PS5, ex GridLink data sources: docs.oracle.com/cd/E23943_01/c… Retweeted by SOA Community Andreas Koop ? Unbelievable! #OFM Doc Lib growth from 11gPS4->11gPS5 by 1.2G! 
SOA Community ODI PS5 is available oracle.com/technetwork/mi… #odi #soacommunity SOA Community Service Bus 11g Development Cookbook soacommunity.wordpress.com/2012/02/20/ser… #osb #soacommunity #ace #opn For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required)

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you’ve never used a Mac before, you’ll probably be a bit baffled at first. But, you’re probably at least coming from a desktop computing background (Windows), so you common frame of reference. But what if you’re just now learning to use a relational database? Yes, you’ve played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle database. Here’s 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started. 1. It’s free No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don’t have any money after books and lab fees anyway. And most employees don’t like having to ask for ‘special’ software anyway. So avoid all of that and make sure the free stuff doesn’t suit your needs first. Upgrades are available on a regular base, also at no cost, and support is freely available via our public forums. 2. It will run pretty much anywhere Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a windows VM to run your Windows-only software in your lab machine. 3. Anyone can install it There’s no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here’s a video tutorial to see how to get started. 4. It’s ubiquitous I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer’s everywhere. It’s had over 2,500,000 downloads in the past year, and is the one of the most downloaded items from OTN. This means if you need help, there’s someone sitting nearby you that can assist, and since they’re in the same tool as you, they’ll be speaking the same language. 5. Simple User Interface Up-up-down-down-Left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There’s only one toolbar, and just a few buttons. If you’re like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole ‘A’, ‘B’, ‘SELECT’, and ‘START’ controls. If you’re new to Oracle, you shouldn’t have the double-workload of learning a new complicated tool as well. 6. It’s not a ‘black box’ Click through your objects, but also get the SQL that drives the GUI As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI, doesn’t mean you’re ceding your responsibility to learn the underlying code that makes the database work. 7. It’s four tools in one It’s not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change. 8. Great learning resources available Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace? 9. 
You can use it to teach yourself SQL Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, ‘just like Access’ – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with it’s completion insight feature. And if you have syntax errors, those will be highlighted – just like your misspelled words in your favorite word processor. 10. It scales to match your experience level You won’t be a n00b forever. In 6-8 months, when you’re ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the ‘advanced’ tool. 11. Wait, you said this was a ‘Top 10′ list? Yes. Yes, I did. I’m using this ‘trick’ to get you to continue reading because I’m going to say something you might not want to hear. Are you ready? Tools won’t replace experience, failure, hard work, and training. Just because you have the keys to the car, doesn’t mean you’re ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don’t like the people that pick up tools without the know-how to properly use them. If you don’t understand what ‘TRUNCATE’ means, don’t try it out. Try picking up a book first. Of course, it’s very nice to have your own sandbox to play in, so you don’t upset the other children. That’s why I really like our Dev Days Database Virtual Box image. It’s your own database to learn and experiment with.
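To give point 9 a face, this is the sort of statement the Query Builder writes for you as you drag a couple of tables in and then toggle back to the Worksheet — the table and column names here are just illustrative, taken from Oracle's familiar HR sample schema:

    SELECT e.employee_id, e.last_name, d.department_name
    FROM   employees e
    JOIN   departments d ON d.department_id = e.department_id;

Seeing the generated SQL next to the visual query is exactly how the tool avoids being a black box.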

    Read the article

  • Use an Ubuntu Live CD to Securely Wipe Your PC’s Hard Drive

    - by Trevor Bekolay
    Deleting files or quickly formatting a drive isn’t enough for sensitive personal information. We’ll show you how to get rid of it for good using a Ubuntu Live CD. When you delete a file in Windows, Ubuntu, or any other operating system, it doesn’t actually destroy the data stored on your hard drive, it just marks that data as “deleted.” If you overwrite it later, then that data is generally unrecoverable, but if the operating system don’t happen to overwrite it, then your data is still stored on your hard drive, recoverable by anyone who has the right software. By securely delete files or entire hard drives, your data will be gone for good. Note: Modern hard drives are extremely sophisticated, as are the experts who recover data for a living. There is no guarantee that the methods covered in this article will make your data completely unrecoverable; however, they will make your data unrecoverable to the majority of recovery methods, and all methods that are readily available to the general public. Shred individual files Most of the data stored on your hard drive is harmless, and doesn’t reveal anything about you. If there are just a few files that you know you don’t want someone else to see, then the easiest way to get rid of them is a built-in Linux utility called shred. Open a terminal window by clicking on Applications at the top-left of the screen, then expanding the Accessories menu and clicking on Terminal. Navigate to the file that you want to delete using cd to change directories and ls to list the files and folders in the current directory. As an example, we’ve got a file called BankInfo.txt on a Windows NTFS-formatted hard drive. We want to delete it securely, so we’ll call shred by entering the following in the terminal window: shred <file> which is, in our example: shred BankInfo.txt Notice that our BankInfo.txt file still exists, even though we’ve shredded it. A quick look at the contents of BankInfo.txt make it obvious that the file has indeed been securely overwritten. We can use some command-line arguments to make shred delete the file from the hard drive as well. We can also be extra-careful about the shredding process by upping the number of times shred overwrites the original file. To do this, in the terminal, type in: shred –remove –iterations=<num> <file> By default, shred overwrites the file 25 times. We’ll double this, giving us the following command: shred –remove –iterations=50 BankInfo.txt BankInfo.txt has now been securely wiped on the physical disk, and also no longer shows up in the directory listing. Repeat this process for any sensitive files on your hard drive! Wipe entire hard drives If you’re disposing of an old hard drive, or giving it to someone else, then you might instead want to wipe your entire hard drive. shred can be invoked on hard drives, but on modern file systems, the shred process may be reversible. We’ll use the program wipe to securely delete all of the data on a hard drive. Unlike shred, wipe is not included in Ubuntu by default, so we have to install it. Open up the Synaptic Package Manager by clicking on System in the top-left corner of the screen, then expanding the Administration folder and clicking on Synaptic Package Manager. wipe is part of the Universe repository, which is not enabled by default. We’ll enable it by clicking on Settings > Repositories in the Synaptic Package Manager window. Check the checkbox next to “Community-maintained Open Source software (universe)”. Click Close. 
You’ll need to reload Synaptic’s package list. Click on the Reload button in the main Synaptic Package Manager window. Once the package list has been reloaded, the text over the search field will change to "Rebuilding search index". Wait until it reads "Quick search," and then type "wipe" into the search field. The wipe package should come up, along with some other packages that perform similar functions. Click on the checkbox to the left of the label "wipe" and select "Mark for Installation". Click on the Apply button to start the installation process. Click the Apply button on the Summary window that pops up. Once the installation is done, click the Close button and close the Synaptic Package Manager window. Open a terminal window by clicking on Applications in the top-left of the screen, then Accessories > Terminal. You need to figure out the correct hard drive to wipe. If you wipe the wrong hard drive, that data will not be recoverable, so exercise caution! In the terminal window, type in: sudo fdisk -l A list of your hard drives will show up. A few factors will help you identify the right hard drive. One is the file system, found in the System column of the list – Windows hard drives are usually formatted as NTFS (which shows up as HPFS/NTFS). Another good identifier is the size of the hard drive, which appears after its identifier (highlighted in the following screenshot). In our case, the hard drive we want to wipe is only around 1 GB large, and is formatted as NTFS. We make a note of the label found under the Device column heading. If you have multiple partitions on this hard drive, then there will be more than one device in this list. The wipe developers recommend wiping each partition separately. To start the wiping process, type the following into the terminal: sudo wipe <device label> In our case, this is: sudo wipe /dev/sda1 Again, exercise caution – this is the point of no return! Your hard drive will be completely wiped. It may take some time to complete, depending on the size of the drive you’re wiping. Conclusion If you have sensitive information on your hard drive – and chances are you probably do – then it’s a good idea to securely delete sensitive files before you give away or dispose of your hard drive. The most secure way to delete your data is with a few swings of a hammer, but shred and wipe from a Ubuntu Live CD is a good alternative!
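If you prefer the terminal over Synaptic, the whole procedure above condenses to a few commands. A cautious sketch — note that shred's long options take two leading dashes, and double-check the device name before wiping anything:

    sudo apt-get install wipe                     # wipe lives in the universe repository
    shred --remove --iterations=50 BankInfo.txt   # securely overwrite and then delete one file
    sudo fdisk -l                                 # identify the partition you intend to destroy
    sudo wipe /dev/sda1                           # irreversible: wipes the whole partition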

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" Blog Series just posted by Ari Mozes, ODM Dev. Manager.  In it, Ari provides detailed guidance in how to leverage various strengths of Oracle Data Mining including the ability to: mine Star Schemas and join tables and views together to obtain a complete 360 degree view of a customer combine transactional data e.g. call record detail (CDR) data, etc. define complex data transformation, model build and model deploy analytical methodologies inside the Database  His blog is posted in a multi-part series.  Below are some opening excerpts for the first 3 blog entries.  This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database.  Many thanks Ari! Mining a Star Schema: Telco Churn Case Study (1 of 3) One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort.  Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns.  While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns.  Both of these aspects - demographics and usage patterns - can provide insight into behavior.Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base.  One case study1 describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema.  That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts...... Mining a Star Schema: Telco Churn Case Study (2 of 3) This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data.  Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.1) Handling missing values for call data recordsThe CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff).  For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer.  This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values.  Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem.  In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero.  In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries.  The referenced case study takes the latter approach.  
The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

select cust_id, cdre.tariff, cdre.month, mins
from cdr_t cdr partition by (cust_id) right outer join
    (select distinct tariff, month from cdr_t) cdre
    on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

....... Mining a Star Schema: Telco Churn Case Study (3 of 3) Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

create or replace view churn_data_high as
select * from churn_prep where value_band = 'HIGH';

It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

begin
  dbms_predictive_analytics.explain(
    'churn_data_high','churn_m6','expl_churn_tab');
end;
/
select * from expl_churn_tab where rank <= 5 order by rank;

ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
-------------------- ----------------- ----------------- ----------
LOS_BAND                                      .069167052          1
MINS_PER_TARIFF_MON  PEAK-5                   .034881648          2
REV_PER_MON          REV-5                    .034527798          3
DROPPED_CALLS                                 .028110322          4
MINS_PER_TARIFF_MON  PEAK-4                   .024698149          5

From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest uni-variate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object relational approach, many related predictors are included within a single top-level column. ..... NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.
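Following the case study's description above — replacing each remaining NULL with the average minutes for that customer/tariff combination — the fill-in step can be sketched with an analytic function. The view name cdr_expanded is just a stand-in for the result of the partition outer join shown earlier:

select cust_id, tariff, month,
       nvl(mins, avg(mins) over (partition by cust_id, tariff)) as mins
from   cdr_expanded;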

    Read the article

  • I Can't Install or Remove Any Application

    - by berkay gürsoy
    When I try to install or remove an application via either Software Center or apt-get install, both fail with debconf errors. Below is the log - please help. (The original output was partly in Turkish.)

    sudo apt-get install aptitude
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
      aptitude-common libboost-iostreams1.49.0 libcwidget3
    Suggested packages:
      aptitude-doc-en aptitude-doc tasksel debtags libcwidget-dev
    The following NEW packages will be installed:
      aptitude aptitude-common libboost-iostreams1.49.0 libcwidget3
    0 upgraded, 4 newly installed, 0 to remove and 48 not upgraded.
    8 not fully installed or removed.
    Need to get 0 B/2,498 kB of archives.
    After this operation, 10.4 MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3.
    debconf: DbDriver "config": mkdir: No such file or directory
    Selecting previously unselected package aptitude-common.
    dpkg: warning: files list file for package 'aspell' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'ubuntu-desktop' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'vuze' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'java-wrappers' missing; assuming package has no files currently installed
    (Reading database ... 198988 files and directories currently installed.)
    Unpacking aptitude-common (from .../aptitude-common_0.6.8.1-2ubuntu1_all.deb) ...
    Selecting previously unselected package libboost-iostreams1.49.0.
    Unpacking libboost-iostreams1.49.0 (from .../libboost-iostreams1.49.0_1.49.0-3.1ubuntu1_amd64.deb) ...
    Selecting previously unselected package libcwidget3.
    Unpacking libcwidget3 (from .../libcwidget3_0.5.16-3.4ubuntu1_amd64.deb) ...
    Selecting previously unselected package aptitude.
    Unpacking aptitude (from .../aptitude_0.6.8.1-2ubuntu1_amd64.deb) ...
    Setting up wicd-daemon (1.7.2.4-2ubuntu1) ...
    Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3.
    debconf: DbDriver "config": mkdir: No such file or directory
    dpkg: error processing wicd-daemon (--configure):
     subprocess installed post-installation script returned error exit status 1
    Setting up man-db (2.6.3-1) ...
    Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3.
    debconf: DbDriver "config": mkdir: No such file or directory
    dpkg: error processing man-db (--configure):
     subprocess installed post-installation script returned error exit status 1
    Setting up dictionaries-common (1.12.10) ...
    Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3.
    Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3.
    debconf: DbDriver "config": mkdir: No such file or directory
    dpkg: error processing dictionaries-common (--configure):
     subprocess installed post-installation script returned error exit status 1
    dpkg: dependency problems prevent configuration of aspell:
     aspell depends on dictionaries-common (>> 0.40); however:
      Package dictionaries-common is not configured yet.
    dpkg: error processing aspell (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of aspell-en:
     aspell-en depends on aspell (>= 0.60.3-2); however:
      Package aspell is not configured yet.
     aspell-en depends on dictionaries-common (>= 0.49.2); however:
      Package dictionaries-common is not configured yet.
    dpkg: error processing aspell-en (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of hyphen-en-us:
     hyphen-en-us depends on dictionaries-common (>= 0.10) | openoffice.org-updatedicts; however:
      Package dictionaries-common is not configured yet.
      Package openoffice.org-updatedicts is not installed.
      Package dictionaries-common which provides openoffice.org-updatedicts is not configured yet.
    dpkg: error processing hyphen-en-us (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of wicd-gtk:
     wicd-gtk depends on wicd-daemon (= 1.7.2.4-2ubuntu1); however:
      Package wicd-daemon is not configured yet.
    dpkg: error processing wicd-gtk (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of wicd:
     wicd depends on wicd-daemon (= 1.7.2.4-2ubuntu1); however:
      Package wicd-daemon is not configured yet.
     wicd depends on wicd-gtk (= 1.7.2.4-2ubuntu1) | wicd-curses (= 1.7.2.4-2ubuntu1) | wicd-cli (= 1.7.2.4-2ubuntu1) | wicd-client; however:
      Package wicd-gtk is not configured yet.
      Package wicd-curses is not installed.
      Package wicd-cli is not installed.
      Package wicd-client is not installed.
      Package wicd-gtk which provides wicd-client is not configured yet.
    dpkg: error processing wicd (--configure):
     dependency problems - leaving unconfigured
    Setting up aptitude-common (0.6.8.1-2ubuntu1) ...
    Setting up libboost-iostreams1.49.0 (1.49.0-3.1ubuntu1) ...
    Setting up libcwidget3 (0.5.16-3.4ubuntu1) ...
    Setting up aptitude (0.6.8.1-2ubuntu1) ...
    update-alternatives: using /usr/bin/aptitude-curses to provide /usr/bin/aptitude (aptitude) in auto mode
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    Errors were encountered while processing:
     wicd-daemon
     man-db
     dictionaries-common
     aspell
     aspell-en
     hyphen-en-us
     wicd-gtk
     wicd
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 proje

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template, which was originally built for Visual Studio 2008.

    The Problem
        Error Compiling transformation: Metadata file 'dotless.Core' could not be found
    In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS template, this was a showstopper for making it work in Visual Studio 2010.
    On a side note: the T4CSS template is a sweet little wrapper that allows you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool. If you haven't tried DotLessCss yet, go check it out now! In short, it is a tool that lets you templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer flexibility as you evolve your CSS and UI.
    Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, it's about T4 templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 template engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.”
    In VS2008, if you wanted to reference a custom assembly in your T4 template (.tt file), you would simply right-click on your project, choose Add Reference, and select that assembly. Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references:
        <#@ assembly name="dotless.Core.dll" #>
    This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010. They now basically sandbox the T4 engine to keep your T4 assemblies separate from your project assemblies. This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project.

    Who broke the build? Oh, Microsoft did!
    In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus it's a breaking change. So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 templates:
    1. GAC your assemblies and use a Namespace Reference or Fully Qualified Type Name
    2. Use a hard-coded Fully Qualified UNC path
    3. Copy the assembly to the Visual Studio "Public Assemblies Folder" and use a Namespace Reference or Fully Qualified Type Name
    4. Use or define a Windows Environment Variable to build a Fully Qualified UNC path
    5. Use a Visual Studio Macro to build a Fully Qualified UNC path
    Options #1 and #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, you would have to adopt one of these approaches.

    Yakkety Yak, use the GAC!
    Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain. But if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as:
        <#@ assembly name="dotless.Core" #>

    Hard Coding ain't that hard!
    The other option, using hard-coded paths (Option #2), is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style:
        <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

    Let's go Public!
    Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like DotLessCss and is my preferred solution. However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location (a rough sketch of such a pre-build step appears at the end of this post). Normally this folder is located at:
        C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies
    Once you have copied your assembly there, you use the type name or namespace syntax again:
        <#@ assembly name="dotless.Core" #>

    Save the Environment!
    Option #4, using a Windows environment variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:
        <#@ assembly name="%mypath%\dotless.Core.dll" #>
    “mypath” is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an MSI installer for deployment, or where you have a pre-existing environment variable you can re-use.

    OMG Macros!
    Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments. An example looks like this:
        <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
    In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating Pre/Post-build Event scripts, you can use that dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 template you want to use with both, you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I’m not sure if T4 templates support any form of compiler switches like “#if (VS2010)” statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008.

    Conclusion
    As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new-found power. Hopefully this all made sense and was helpful to you. If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.
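    As mentioned under Option #3, the copy into the Public Assemblies folder can be wired up as a pre-build event. The command below is only a rough sketch: the lib\dotless.Core.dll path is a placeholder for wherever the assembly lives in your solution, $(DevEnvDir) is the standard VS macro for the Common7\IDE folder, and copying under Program Files generally requires an elevated Visual Studio (or an installer doing the copy instead).

        REM Pre-build event (sketch): copy the helper assembly into the VS
        REM Public Assemblies folder so the namespace-style assembly directive resolves.
        xcopy /Y "$(SolutionDir)lib\dotless.Core.dll" "$(DevEnvDir)PublicAssemblies\"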
Happy T4 templating, and “May the fourth be with you!”

    Read the article
