Search Results

Search found 1798 results on 72 pages for 'incoming'.

Page 64/72

  • Explanation of the init.d/scripts Fedora

    - by Shahmir Javaid
    Below is a copy of vsftpd, i need some explanations of some of the scripts mentioned below in this script: #!/bin/bash # ### BEGIN INIT INFO # Provides: vsftpd # Required-Start: $local_fs $network $named $remote_fs $syslog # Required-Stop: $local_fs $network $named $remote_fs $syslog # Short-Description: Very Secure Ftp Daemon # Description: vsftpd is a Very Secure FTP daemon. It was written completely from # scratch ### END INIT INFO # vsftpd This shell script takes care of starting and stopping # standalone vsftpd. # # chkconfig: - 60 50 # description: Vsftpd is a ftp daemon, which is the program \ # that answers incoming ftp service requests. # processname: vsftpd # config: /etc/vsftpd/vsftpd.conf # Source function library. . /etc/rc.d/init.d/functions # Source networking configuration. . /etc/sysconfig/network RETVAL=0 prog="vsftpd" start() { # Start daemons. # Check that networking is up. [ ${NETWORKING} = "no" ] && exit 1 [ -x /usr/sbin/vsftpd ] || exit 1 if [ -d /etc/vsftpd ] ; then CONFS=`ls /etc/vsftpd/*.conf 2>/dev/null` [ -z "$CONFS" ] && exit 6 for i in $CONFS; do site=`basename $i .conf` echo -n $"Starting $prog for $site: " daemon /usr/sbin/vsftpd $i RETVAL=$? echo if [ $RETVAL -eq 0 ]; then touch /var/lock/subsys/$prog break else if [ -f /var/lock/subsys/$prog ]; then RETVAL=0 break fi fi done else RETVAL=1 fi return $RETVAL } stop() { # Stop daemons. echo -n $"Shutting down $prog: " killproc $prog RETVAL=$? echo [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog return $RETVAL } # See how we were called. case "$1" in start) start ;; stop) stop ;; restart|reload) stop start RETVAL=$? ;; condrestart|try-restart|force-reload) if [ -f /var/lock/subsys/$prog ]; then stop start RETVAL=$? fi ;; status) status $prog RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|try-restart|force-reload|status}" exit 1 esac exit $RETVAL Question I What the hell is the difference between the && and || signs in the below commands, and is it just an easy way to do a simple if check or is it completely different to a if[..something..]; then ..something.. fi: # Check that networking is up. [ ${NETWORKING} = "no" ] && exit 1 [ -x /usr/sbin/vsftpd ] || exit 1 Question II i get what -eq and -gt is (equal to, greater than) but is there a simple website that explains what -x, -d and -f are? Any help would be apreciated Running Fedora 12 on my OS. Script copied from /etc/init.d/vsftpd Question III It says required starts are $local_fs $network $named $remote_fs $syslog but i cant see any where it checks for those.
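
    For Questions I and II, a quick illustration (my own sketch, not part of the original script): [ ... ] && cmd and [ ... ] || cmd are just compact one-branch forms of an if block, and -x, -d and -f are file-test operators documented in "man test" (or "help test" in bash).

    ```bash
    #!/bin/bash
    # "&&" runs the right-hand command only when the test on the left succeeds:
    [ "${NETWORKING}" = "no" ] && exit 1    # same as: if [ "${NETWORKING}" = "no" ]; then exit 1; fi
    # "||" runs the right-hand command only when the test on the left fails:
    [ -x /usr/sbin/vsftpd ] || exit 1       # same as: if ! [ -x /usr/sbin/vsftpd ]; then exit 1; fi

    # The file-test operators used throughout the script:
    [ -x /usr/sbin/vsftpd ]        && echo "file exists and is executable"
    [ -d /etc/vsftpd ]             && echo "path exists and is a directory"
    [ -f /var/lock/subsys/vsftpd ] && echo "path exists and is a regular file"
    ```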

    Read the article

  • Sendmail Tuning For Batch Mail Jobs

    - by Kyle Brandt
    I have webservers that send out emails to a sendmail relay server as a batch job. The emails need to be accepted by the relay sendmail server as fast as possible; however, they do not need to go out (be relayed) very quickly. I am seeing a couple of timeouts once in a while from the webserver trying to connect to the relay server. The load currently is about 30 emails a second for a couple of minutes. There are quite a few tuning options for sendmail in the sendmail tuning guide. What I am focusing on now is the Delivery Mode: Delivery Mode There are a number of delivery modes that sendmail can operate in, set by the DeliveryMode ( d) configuration option. These modes specify how quickly mail will be delivered. Legal modes are: i deliver interactively (synchronously) b deliver in background (asynchronously) q queue only (don't deliver) d defer delivery attempts (don't deliver) There are tradeoffs. Mode i gives the sender the quickest feedback, but may slow down some mailers and is hardly ever necessary. Mode b delivers promptly but can cause large numbers of processes if you have a mailer that takes a long time to deliver a message. Mode q minimizes the load on your machine, but means that delivery may be delayed for up to the queue interval. Mode d is identical to mode q except that it also prevents lookups in maps including the -D flag from working during the initial queue phase; it is intended for ``dial on demand'' sites where DNS lookups might cost real money. Some simple error messages (e.g., host unknown during the SMTP protocol) will be delayed using this mode. Mode b is the usual default. If you run in mode q (queue only), d (defer), or b (deliver in background) sendmail will not expand aliases and follow .forward files upon initial receipt of the mail. This speeds up the response to RCPT commands. Mode i should not be used by the SMTP server. I currently have the CentOS default modes: Sendmail.cf: DeliveryMode=background Submit.cf: DeliveryMode=i Is sendmail.cf/mc for outgoing email from the relay (to the intertubes) and submit.cf/mc for incoming email (from my webservers)? Would it make sense to change the outgoing delivery mode to queue? If I did, how would the outbound email flow behave? If this is the right thing to do, can anyone show me example mc configurations for this change? If it isn't, what recommendations are there for these constraints?
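
    Since the question asks for an example mc configuration, here is a minimal sketch of switching the relay to queue-only delivery on a stock CentOS layout (the paths and the append-to-sendmail.mc approach are assumptions about the local setup; queued mail is then flushed on the queue-runner interval rather than immediately):

    ```bash
    # Hedged sketch: set the relay's DeliveryMode to q (queue only) and rebuild sendmail.cf.
    echo "define(\`confDELIVERY_MODE', \`q')dnl" >> /etc/mail/sendmail.mc
    make -C /etc/mail          # regenerates /etc/mail/sendmail.cf from sendmail.mc
    service sendmail restart
    mailq                      # accepted-but-not-yet-relayed messages now sit in the queue
    ```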

    Read the article

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc -- so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets would change maybe 50 times a month (across all the servers, so maybe one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that). The "musts" we've come up with for any replacement system are: Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point: Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point) Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible) Must be DFSG-free Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools are our SOP) Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards) Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language" Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves": Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically) Should support raw tables Should allow you to specify particular ICMP in both incoming packets and REJECT rules Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt) The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins) the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
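
    For context on the "generate a file and load it in one hit" requirement, this is the underlying pattern any acceptable tool would reduce to, sketched by hand (the file path and rule contents are illustrative only):

    ```bash
    # /etc/firewall/ruleset.v4 -- generated in iptables-save format (illustrative excerpt):
    #   *filter
    #   :INPUT DROP [0:0]
    #   :FORWARD DROP [0:0]
    #   :OUTPUT ACCEPT [0:0]
    #   -A INPUT -i lo -j ACCEPT
    #   -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    #   -A INPUT -p tcp --dport 22 -j ACCEPT
    #   COMMIT
    # Applying it is a single atomic commit per table, not one iptables call per rule,
    # so there is no window with a half-loaded firewall:
    iptables-restore < /etc/firewall/ruleset.v4
    ip6tables-restore < /etc/firewall/ruleset.v6   # IPv6 ruleset generated the same way
    ```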

    Read the article

  • Wireless traffic stops when downloading large files at high speed: packets lost (Linksys WRT120N router)

    - by Torious
    The problem Note: First I'd like to understand WHY this is happening. Of course, a solution would be nice too. :) When downloading a large file over HTTP at high speeds, my wireless traffic basically stops: I can't open webpages and the download itself pauses. It pauses pretty much immediately after starting it; sometimes at 800 KB, sometimes at a few MB. After some time, the download (and other traffic) resumes, but the problem keeps recurring during the same download. The problem does not occur when using a wired connection through the same router (Linksys WRT120N). Also note that the connection is not dropped when this happens. It's just that the traffic stops and I can't browse to web pages, etc. (SYN packets are sent but nothing is received, etc.) Inspection with Wireshark shows that the following happens: Server sends data packets which are acknowledged by client Server sends a packet, but SEQ indicates some packets were lost (6 packets in one occurrence). Server sends a few more packets and client acknowledges these using "selective acknowledgement" Server stops sending data for a while (since the lost packets were not acknowledged or the router stops forwarding them?) Eventually, server does a "retransmission" and traffic resumes as normal. This all seems normal behavior to me when packet loss occurs. It's the consistent packet loss throughout a large, high-speed download that puzzles me. What might cause this? My own idea is the following: My internet is pretty fast (100 mbps), so when starting a large-file download, the router buffers the incoming data (since wireless introduces some slight delay / lower speed, in part due to other networks), but the buffer overflows and the router drops packets to regulate traffic (and because it has no choice). But how could that happen? Doesn't the TCP window size limit the amount of data that can go unacknowledged? So how can the router's buffer overflow if there can only be like 64 KB waiting to be acknowledged? Note: I've disabled TCP window scaling and dynamic window size through netsh options, in an attempt to fix this, but it doesn't seem to matter. Also, Wireshark shows a pattern of the server sending 2 packets (of 1514 bytes) and the client sending an ACK, so does that rule out a possible buffer overflow? And a few more subsequent packets are received... I'm at a loss here. Thanks for any insights. Things that are (probably) NOT the cause / I have experimented with: The browser Various TCP options in Windows 7 (netsh etc.) Router settings such as MTU, beacon interval, UPnP, ...
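
    One bit of arithmetic that bears on the buffering question (my own back-of-envelope, not from the original post): even a single connection capped at a 64 KB window can have roughly 43 full-size frames in flight, all of which the router has to queue whenever the 100 Mbit/s wired side delivers them faster than the wireless side drains them.

    ```bash
    # 64 KB window divided by the 1514-byte frames seen in the Wireshark capture
    echo $(( 64 * 1024 / 1514 ))    # prints 43: frames potentially in flight per connection
    ```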

    Read the article

  • Cannot connect to WEBrick on home network

    - by Chris Stewart
    I'm an Android developer and often my applications require server-side code. I typically use Ruby on Rails for the web app, and during development will run the server on my local machine (Mac OS X) with WEBrick. In the morning when I get to the office, I'll run ifconfig in the console to see what IP my laptop has been given that day. I'll use that IP in my Android app when making requests to the web app in question. This all works fine, when I'm in my office. When I get home, I attempt to do the same thing, find my laptop's IP via ifconfig, set it in my app's config file, but the destination can never be found. To exclude my app from the set of hurdles, I attempt to visit the web server IP (e.g., http://192.168.1.4:3000) from my phone's browser, and it cannot connect. If I try from my laptop, which is running the web server, it works fine. If I try from another machine, on the same network, it also is unable to connect. Given this, I think I've narrowed it down to some kind of configuration in my home network, but I frankly have no idea what the cause could be. I don't have anything special at home, your basic Verizon FiOS router/modem with everything connected via Wi-Fi (Wi-Fi for both phone and laptop at work as well, fyi). I've tried disabling the firewall on my Verizon router, enabling port forwarding, and just about everything else I could do for port 3000, and nothing has changed. Dear Server Fault geniuses, please help a poor developer out. :) Edit: Some follow up items to add. My Mac's firewall is not active, and all incoming requests are allowed. I've also verified on my phone and laptop, that they're on the same network (192.168.1.4 Mac, 192.168.1.9 Phone). I have no idea why this isn't working. Edit 2: I went into System Preferences, enabled Web Sharing, and tried to view the website from my phone and it didn't connect. So it's not WEBrick or related to Rails. The firewall on my machine is off and the firewall on my router is off. Edit 3: Some progress. I set up port forwarding for port 3000 to my laptop, found the external IP, and used that and it connected fine. So, there's definitely something not quite set up correctly on my internal network.
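
    A few checks that narrow down whether the server or the home network is at fault (a diagnostic sketch; the IPs are the ones from the question, and the binding flag is the Rails 3-era syntax):

    ```bash
    # Make sure WEBrick is bound to all interfaces rather than loopback only
    rails server -b 0.0.0.0 -p 3000

    # On the Mac, confirm the listening socket shows *:3000 and not 127.0.0.1:3000
    sudo lsof -iTCP:3000 -sTCP:LISTEN

    # From the other machine on the LAN, test the raw TCP port instead of the app
    nc -vz 192.168.1.4 3000
    ```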

    Read the article

  • Port forwarding using a BT Home Hub 2.0 (Supplied to new BT Infinity Customers in the UK)

    - by Jasarien
    I don't usually have trouble with port forwarding, I've been able to do it successfully on a number of different routers, including Linksys, Belkin, Netgear and Apple (Time Capsule / Airport Extreme). So I'm quite confused here. I had been using my Apple Time Capsule as my router for a few years now, with several port mappings all working fine. But it died recently, so I've had to resort to using the BT Home Hub 2.0 that was supplied with my BT Infinity broadband subscription. The forwarding interface for the Home Hub is simplified for the most part, allowing you to select an application or game and assign it to a particular computer on the network which you choose from a list that the Home Hub has 'discovered'. My Mac Pro has a manually assigned static IP 192.168.1.4 and my router is static at 192.168.1. I have chosen SSH from the list of applications and assigned it to my Mac Pro (the only computer in the list currently). The Home Hub also has a feature to keep a DNS service updated, and I have set it to keep my external IP address updated on my hostname. This is how I had it setup in the past with other routers and not had trouble before. I am able to ping my hostname (and external IP) from outside the network and get a response. But when I try to connect using SSH, the connection times out. The Home Hub also has "Firewall settings". The currently selected setting is: Default: Allow all outgoing connections and block all incoming traffic. Games and application sharing is allowed. But I've tried changing it to: Disabled: All traffic is allowed to pass through your BT Home Hub to your devices. Note: you’ll still need to use the games and application sharing feature to make sure that certain applications work properly. And the connection still times out... So frustrating. The OS X firewall on my Mac is disabled, so I don't think that's in the way. I have tried setting the port forwarding manually, instead of relying on the preset "SSH" option (incase it's not using the port I expect). So I set up my own "application" (as the Home Hub calls it) and forwarded external port 22 TCP to internal port 22 TCP to 192.168.1.4 - but that just gives the same result - unable to connect. Next, with the router's firewall disabled and OS X's firewall disabled, I ran the Shields Up test (https://www.grc.com/x/ne.dll?bh0bkyd2) and the result was that all my service ports (0 - 1055) are in 'Stealth' mode. I.e. nothing even exists at my IP as far as any outsider is concerned... Strange. The only thing that seems to work is setting my Mac Pro as the DMZ - which I don't want to do for obvious reasons. Any help with this would be extremely appreciated, thanks.
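
    Before pointing at the Home Hub again, it is worth proving that sshd is reachable from inside the LAN at all; a diagnostic sketch (macOS commands, the hostname is a placeholder):

    ```bash
    sudo systemsetup -getremotelogin     # "Remote Login: On" means OS X's sshd is enabled
    sudo lsof -iTCP:22 -sTCP:LISTEN      # should show a listener bound to *:22
    ssh -v user@192.168.1.4              # from another machine on the LAN
    ssh -v user@your-dyndns-hostname     # from outside; -v shows where the handshake stalls
    ```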

    Read the article

  • Switch flooding when bonding interfaces in Linux

    - by John Philips
    +--------+ | Host A | +----+---+ | eth0 (AA:AA:AA:AA:AA:AA) | | +----+-----+ | Switch 1 | (layer2/3) +----+-----+ | +----+-----+ | Switch 2 | +----+-----+ | +----------+----------+ +-------------------------+ Switch 3 +-------------------------+ | +----+-----------+----+ | | | | | | | | | | eth0 (B0:B0:B0:B0:B0:B0) | | eth4 (B4:B4:B4:B4:B4:B4) | | +----+-----------+----+ | | | Host B | | | +----+-----------+----+ | | eth1 (B1:B1:B1:B1:B1:B1) | | eth5 (B5:B5:B5:B5:B5:B5) | | | | | | | | | +------------------------------+ +------------------------------+ Topology overview Host A has a single NIC. Host B has four NICs which are bonded using the balance-alb mode. Both hosts run RHEL 6.0, and both are on the same IPv4 subnet. Traffic analysis Host A is sending data to Host B using some SQL database application. Traffic from Host A to Host B: The source int/MAC is eth0/AA:AA:AA:AA:AA:AA, the destination int/MAC is eth5/B5:B5:B5:B5:B5:B5. Traffic from Host B to Host A: The source int/MAC is eth0/B0:B0:B0:B0:B0:B0, the destination int/MAC is eth0/AA:AA:AA:AA:AA:AA. Once the TCP connection has been established, Host B sends no further frames out eth5. The MAC address of eth5 expires from the bridge tables of both Switch 1 & Switch 2. Switch 1 continues to receive frames from Host A which are destined for B5:B5:B5:B5:B5:B5. Because Switch 1 and Switch 2 no longer have bridge table entries for B5:B5:B5:B5:B5:B5, they flood the frames out all ports on the same VLAN (except for the one it came in on, of course). Reproduce If you ping Host B from a workstation which is connected to either Switch 1 or 2, B5:B5:B5:B5:B5:B5 re-enters the bridge tables and the flooding stops. After five minutes (the default bridge table timeout), flooding resumes. Question It is clear that on Host B, frames arrive on eth5 and exit out eth0. This seems ok as that's what the Linux bonding algorithm is designed to do - balance incoming and outgoing traffic. But since the switch stops receiving frames with the source MAC of eth5, it gets timed out of the bridge table, resulting in flooding. Is this normal? Why aren't any more frames originating from eth5? Is it because there is simply no other traffic going on (the only connection is a single large data transfer from Host A)? I've researched this for a long time and haven't found an answer. Documentation states that no switch changes are necessary when using mode 6 of the Linux interface bonding (balance-alb). Is this behavior occurring because Host B doesn't send any further packets out of eth5, whereas in normal circumstances it's expected that it would? One solution is to setup a cron job which pings Host B to keep the bridge table entries from timing out, but that seems like a dirty hack.
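
    Two things that make the behaviour easier to observe (a sketch, not a fix; the IP in the cron line is hypothetical): check what the bonding driver reports for each slave, and if necessary automate the stopgap ping the question already mentions so the eth5 MAC never ages out of the bridge tables.

    ```bash
    # On Host B: confirm the mode and per-slave state the kernel is actually using
    grep -E 'Bonding Mode|Slave Interface|MII Status|Permanent HW addr' /proc/net/bonding/bond0

    # /etc/cron.d/keep-bridge-entry-warm -- the "dirty hack" from the question: ping Host B
    # from a workstation every 4 minutes, comfortably under the 300-second bridge ageing timeout.
    # */4 * * * * root ping -c 3 192.168.10.20 >/dev/null 2>&1
    ```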

    Read the article

  • What the best way to achieve RPO of zero and lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which is processing high volumes of transactions 24/7 and are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor however their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site but this is: A bad solution Getting the software vendor hot and bothered whenever we have support issues as it has a tendency to interfere with the payment application which is very DB intensive. We recently upgraded the whole system to run on SQL 2008 R2 Enterprise which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives. RPO: Zero loss - This is transactional data with financial impact so we can't lose anything. RTO: Tending to zero - The business depends on our ability to process transactions every minute we are down we are losing money I have looked at a few of the other questions/answers but none meet our case exactly: SQL Server 2008 failover strategy - Log shipping or replication? How to achieve the following RTO & RPO with logshipping only using SQL Server? What is the best of two approaches to achieve DB Replication? My current thinking is that we should use mirroring but I am concerned that for RPO:0 we will need to do delayed commits and this could impact the performance of the primary DB which is not an option. Our current DR process is to: Stop incoming traffic to the primary site and allow all in-flight transaction to complete. Allow the replication to DR to complete. Change network routing to route to DR site. Start all applications and services on the secondary site (Ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions). In other words the DR database needs to, as quickly as possible, catch up with primary and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log-shipping too) and can anyone suggest other considerations that we should keep in mind?

    Read the article

  • How to configure DD-WRT routing table when creating an isolated network segment for PCI C VT compliance

    - by tetranz
    I'm the volunteer support and system admin person at a small private school. We need to setup a PCI compliant Windows PC as a virtual terminal for credit card processing. I've read questionnaire SAQ C-VT and, to quote, this computer needs to be accessed: "via a computer that is isolated in a single location, and is not connected to other locations or systems within your environment (this can be achieved via a firewall or network segmentation to isolate the computer from other systems)" Our setup is as follows: DSL modem from ISP is setup to be a "transparent pipe" with no extra services. That goes into the WAN port of Linksys WRT54-GL running a DD-WRT. The LAN is 192.168.1.x. There are a couple of other WRT54-GL / DD-WRT devices. One is used as a wireless AP and another is a client bridge. To isolate the VT (virtual terminal) machine, I have another DD-WRT device. Its WAN is connected to a port on the 192.168.1.x LAN. The virtual terminal machine is connected to its LAN which is at 192.168.10.x. The SPI Firewall etc is turned on. It's basically the default DD-WRT gateway setup where the "ISP" is our own LAN. That's working. All incoming traffic to the VT machine is blocked, including from our own LAN. The VT can access the internet BUT, and here's the problem, it can also ping any of the computers on the 192.168.1.x LAN. I think I need to stop that. I'm guessing that I could do something with the Static Routing table in the VT machine's DD-WRT device. I need to route anything going to 192.168.1.x other than the gateway which is 192.168.1.1 to 0.0.0.0 or something like that. That's where I'm stuck at the end of my knowledge. Or ... do I need to get yet another DD-WRT so the network is "balanced". Maybe I need to have the internet from the DSL going into a DD-WRT which has only two devices on its LAN i.e., two other DD-WRTs, one for the main LAN and one for the VT. I think that would do but I'd like to avoid the extra cost and complexity if I don't need it. Thanks
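
    As an alternative to wrestling with the static routing table, one common DD-WRT approach is a firewall rule on the VT segment's router (Administration -> Commands -> Save Firewall) that drops anything forwarded back toward the parent LAN while the default route still carries internet traffic. A sketch under those assumptions; the gateway exception only matters if 192.168.1.1 is also your DNS:

    ```bash
    # Drop forwarded traffic destined for the parent 192.168.1.x LAN, but keep the
    # upstream gateway/DNS reachable. -I inserts at the top of the chain, so the
    # ACCEPT (added second) ends up above the DROP.
    iptables -I FORWARD -d 192.168.1.0/24 -j DROP
    iptables -I FORWARD -d 192.168.1.1 -j ACCEPT
    ```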

    Read the article

  • Looking for a new, free firewall (Sunbelt has a huge hole)

    - by Jason
    I've been using Sunbelt Personal Firewall v. 4.5 (previously Kerio). I've discovered that blocking Firefox connections in the configuration doesn't stop EXISTING Firefox connections. (See my post here yesterday http://superuser.com/questions/132625/sunbelt-firewall-4-5-wont-block-firefox) The "stop all traffic" may work on existing connections - but I'm done testing, as I need to be able to be selective, at any time. I was using the free version, so the "web filtering" option quit working after some time (mostly blocking ads and popups), but I didn't use that anyway. I used the last free version of Kerio before finally having to go to Sunbelt, because Kerio had an unfixed bug where you'd eventually get the BSOD and have to reset Kerio's configuration and start over (configure everything again). So I'm looking for a new firewall. I don't like ZoneAlarm at all (no offense to all its users that may be here - personal taste). I need the following: (Sunbelt has all these, except *) - 1. Be able to block in/out to localhost (trusted)/internet selectively for each application with a click (so there's 4 click boxes for each application) [*that affects everything immediately, regardless of what's already connected]. When a new application attempts a connection, you get an allow/deny/remember window. - 2. Be able to easily set up filter rules for 'individual application'/'all applications,' by protocol, port/address (range), local, remote, in, out. [*Adding a filter rule also doesn't block existing connections in Sunbelt. That needs to work too.] - 3. Have an easy-to-get-to way to "stop all traffic" (like a right-click option on the running icon in the task bar). - 4. Be able to set trusted/internet in/out block/allowed (4 things per item) for each of IGMP, ping, DNS, DHCP, VPN, and broadcasts. - 5. Define localhost as trusted/untrusted, define adapter connections as trusted/untrusted. - 6. Block incoming connections during boot-up and shutdown. - 7. Show existing connections, including local & remote ip/port, protocol, current speed, total bytes transferred, and local ports opened for Listening. - 8. An Intrusion Prevention System which blocks (optionally select each one) known intrusions (long list). - 9. Block/allow applications from starting other applications (deny/allow/remember window). Wish list: A way of knowing what svchost.exe is doing - who is actually using it/calling it. I allowed it for localhost, and selectively allowed it for internet each time the allow/deny window came up. Thanks for any help/suggestions. (I'm using Windows XP SP3.)

    Read the article

  • Server 2012 intermittently fails to respond to pings from single host, even with firewall disabled, but responds to non-ICMP requests fine

    - by James Westbury
    This one is kind of weird. I've got the following machines involved: DC01 - 10.1.2.42, Server 2012, domain controller & DNS server, physical machine nagiosv - 10.1.2.35, CentOS 6.4, Nagios, virtual machine CB01 - 10.1.3.81, Ubuntu 12.04 LTS, couchbase server, virtual machine So, I noticed something was wrong while configuring this new Nagios VM. I started seeing DC01's state flapping. I logged into nagiosv when I saw this happening, and attempted to ping DC01, both by FQDN and its IP address. Neither worked. I tried pinging the machine from CB01, which is another VM on the same virtual switch/physical NIC as nagiosv, and that worked fine. Pings still failing from nagiosv at this time. DC01 is also an internal DNS server, so I ran dig google.com from nagiosv, and was able to run a query against DC01 just fine: ;; Query time: 1 msec ;; SERVER: 10.1.2.42#53(10.1.2.42) ;; WHEN: Fri Nov 1 07:53:51 2013 ;; MSG SIZE rcvd: 204 Pings still failing from nagiosv, though. I can ping from DC01 to nagiosv, and that works, and I can still ping from other VMs on the same physical NIC into DC01, and that works. I should mention at this point that I've disabled the firewall on DC01 for testing purposes, and it doesn't make a damned bit of difference. (Even with the firewall enabled, I have a blanket exception for ICMP from the local subnet, so it shouldn't make a difference, but I figured I should test it anyway.) I loaded up Wireshark on DC01 and pinged it from nagiosv again. What I see is a bunch of echo requests coming in and not a single reply going back out. Filtered results here, showing all ICMP traffic during a 15-second period. A few more bits of info: There are no IP conflicts on the network. MAC addresses on the incoming pings match the MAC on the VM. There are no duplicate MACs on the network, as far as I can see. I have absolutely no idea why DC01 is failing to respond, here. Any ideas?

    Read the article

  • Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?

    - by cwd
    I have Google Apps installed and I have tried to set up Outlook 2007 to send messages via SMTP. I followed the guide, selecting what I believe are all the correct settings. Yes, I am using POP for incoming, that is intentional but I don't believe it should affect outgoing messages. When I log into gmail (google apps) for my company, I can send a message that has an 8MB attachment (pdf file, not zipped or anything) and it sends fine. However, when I send the same message in Outlook with that same 8mb attachment it fails. Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps? The message headers are (some info omitted for privacy): Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 552 552 #5.3.4 message size exceeds limit (state 17). ----- Original message ----- DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=company.com; s=google; h=from:to:cc:references:in-reply-to:subject:date:message-id :mime-version:content-type:x-mailer:thread-index:content-language; bh=7d4i/Cbt0v0sY3zt5lN6y5CdvxjbRmTBG4AuBuMxtF4=; b=IJwwxuIEdg1E4zXuGjeDod+1w3RYBBCNzSsqpuX77ih36HSiq++s3ZCQXPeU9CIZVg K8JPJQu9xjivYYjrRaYwyeowLIu0GIdR2h4kKEkFM/GNC2RFF3VwVgj+gvi5eqVZIuWn osT5/VEm10IED6B54NPOtGMgFTci6a57zzVKE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=from:to:cc:references:in-reply-to:subject:date:message-id :mime-version:content-type:x-mailer:thread-index:content-language :x-gm-message-state; bh=7d4i/Cbt0v0sY3zt5lN6y5CdvxjbRmTBG4AuBuMxtF4=; b=LjTecjok5K71Bymp6tZqAL2XCz03hWROV1mTK8Vf2AeEJwtel9ACu9kE5jW5iJqckb upYKPzoqYLBwAPOzMb9asWoTAZPzC7LMG65eDUc2/ZEvGqXrZs3ziUxwhF4t169yRVuy /6nm/aAt5uPMLPdobxGTJ8ahOIku1Z3gW+OcvZ6ERk1Av/bvuln09vcnyJIrHGh7eK8n cbGVxmK0aecgSPgIj2NALbHkyuxwj+LEBRV6uiz3THDjxAiNfsO5UFjV59sD+lVSBT3z ThOGE8WEXRnKHuP3FuKXyeUxKBZ2CxpWJpvDuS9EsFkln7zkISYEsRA0nUA6GSGi2Z/n 8YUg== Received: by 10.60.169.197 with SMTP id ag5mr12254920oec.137.1351036287413; Tue, 23 Oct 2012 16:51:27 -0700 (PDT) References: Date: Tue, 23 Oct 2012 19:51:16 -0400 Message-ID: <003a01cdb179$4bb2ca60$e3185f20$@com> MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_NextPart_000_003B_01CDB157.C4A12A60" X-Mailer: Microsoft Office Outlook 12.0 Thread-Index: Ac2xVCHGxoC7DDOkQBK3JSXowHb0EQAEB7agAAA/YKAAAIGcQAAAngfQAABAAPAAAFe7gAAAadvw AALgvLA= Content-Language: en-us X-Gm-Message-State: ALoCoQniMq7Fnh+NlfoWjTJPvKWbkhEaftSaFo9ZVvtRpWufTmhlRDx1a9Jf+wmYcbRh896gygNr The company I am sending email to is a company that uses Google Apps for Teams. This is their apps admin login. Should I be worried about that message? My Settings On the Google apps side I have set my SPF record and set / verified my DKIM key. Here are my outlook settings: Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with a gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- Presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck. System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
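
    For question (c), these are the kernel-level limits that most often cap high connection-rate setups; a sketch with illustrative values rather than tuned recommendations:

    ```bash
    # Ceiling on the listen() backlog (nginx/gunicorn backlog values are clamped to this)
    sysctl -w net.core.somaxconn=4096
    # Widen the ephemeral port range and reuse TIME_WAIT sockets for outgoing connections
    sysctl -w net.ipv4.ip_local_port_range="10240 65535"
    sysctl -w net.ipv4.tcp_tw_reuse=1
    # Raise the per-process file descriptor limit for the nginx and gunicorn users
    ulimit -n 65536
    ```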

    Read the article

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second webserver because I worry about hardware failure of my old server. Now that I have that second server I wish to do a little more than just have one server stand by and replicate all day. As long as it's there I might as well get some advantage out of it! I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves heartbeat for IP failover and two identical servers. Of the two servers only one has a public IP address. If one server crashes, the other server takes over the public IP address. On an incoming request nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server. I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well but I'm reading it's discouraged by the MySQL community. Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their filesystem and would still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive I could just prefer nginx to listen on IP A and MySQL on IP B. If one server is down, the other server could take over the task of the other server, simply by IP switching. But I'm completely new at this so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this please let me know! PS: virtualisation, hosting in different locations or active/passive setups are not solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover at another location. But in case of a crash I found the site was still unreachable for too long due to DNS caching.
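
    Whichever of the two layouts you settle on, the failover decision ends up keyed to the same health signal; a minimal sketch of what the monitoring side would poll (credentials are placeholders):

    ```bash
    # On the standby: both replication threads running and near-zero lag before
    # it is safe to move the public IP over.
    mysql -u monitor -p'secret' -e "SHOW SLAVE STATUS\G" \
      | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
    ```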

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace.  In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability. Crucial Questions For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base by the acknowledged USPS certified address accuracy experts at Melissa Data. Reference Data Services Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization.  These capabilities extend the data quality that DQS has natively by quite a bit. For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area. Step by Step to Guide That was all it took to hook in the subscribed provider -Melissa Data- directly to my DQS Client. The next step was for me to attach and map in my Reference Data from the newly acquired reference data provider, to a domain in my knowledge base. On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen. 
Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name Address Suite City State Zip I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and under the Reference Data tab, added my third party reference data provider –Melissa Data’s Address Check- and mapped in each domain that I had to the provider’s Schema. Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes with some progress indicators indicating that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by  Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simplistic for any IT professional. Final Note As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • Can't install wine (or ia32-libs) in Ubuntu 12.10 64 bit

    - by carestad
    As already pointed out here, people seems to have issues with installing wine in the latest version of Ubuntu. I'm suspecting this only happens with 64 bit users. For example, when trying to install wine, wine1.4, wine1.4:i386, wine1.5, wine1.5:i386, ia32-libs or ia32-libs:i386 with apt-get, I get a lot of dependency errors. Doing a sudo apt-get -f install doesn't seem to do the trick, neither does using aptitude. The errors I get is normally that the packages depend on some :i386 package, but installing those manually doesn't work either because they also have dependency issues (isn't APT supposed to do this automatically?!). I also downloaded CrossOver today and tried installing the .deb manually, but the dependency issues show up there as well. When running sudo apt-get -f install after trying to install the CrossOver .deb, apt-get wants to purge the following packages: ia32-crossover intel-gpu-tools libdrm-nouveau2 libgl1-mesa-dri libva-x11-1 ubuntu-desktop vlc xorg xserver-xorg-video-ati xserver-xorg-video-intel xserver-xorg-video-modesetting xserver-xorg-video-openchrome xserver-xorg-video-radeon xserver-xorg-video-vmware What I've tried so far (and didn't work): Installing synaptic, reloading my repositories, searching for ia32 and installing ia32-libs. Using Ubuntu Software Center to install Wine and ia32-libs. Using apt-get and aptitude to install all the differend varieties of the wine packages, both with and without the :i386 and -amd64 suffixes in package names. Disabling the universe and multiverse repos, run a sudo apt-get update and then re-enable them again. Boot a newly downloaded Ubuntu 12.10 x64 live USB and try to install all the different packages there. What I haven't tried (yet): Boot a newly downloaded Ubuntu 12.10 x32 image and try to install wine there (I'm just guessing that will work). Reinstall Ubuntu. Throw my computer out a window. wine alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: wine : Depends: wine1.5 but it is not going to be installed E: Unable to correct problems, you have held broken packages. wine-1.4 alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine1.4 (...) The following packages have unmet dependencies: wine1.4 : Depends: wine1.4-i386 (= 1.4.1-0ubuntu1) E: Unable to correct problems, you have held broken packages. wine-1.4:i386 alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine1.4:i386 (...) 
The following packages have unmet dependencies: libaudio2:i386 : Depends: libxt6:i386 but it is not going to be installed libqtgui4:i386 : Depends: libsm6:i386 but it is not going to be installed libunity-webapps0 : Depends: unity-webapps-service but it is not going to be installed openssh-client : Depends: adduser (>= 3.10) but it is not going to be installed Depends: passwd ssh : Depends: openssh-server wine1.4:i386 : Depends: wine1.4-i386:i386 (= 1.4.1-0ubuntu1) Depends: binfmt-support:i386 (>= 1.1.2) Depends: procps:i386 Recommends: cups-bsd:i386 Recommends: gnome-exe-thumbnailer:i386 but it is not installable or kde-runtime:i386 but it is not going to be installed Recommends: ttf-droid:i386 but it is not installable Recommends: ttf-liberation:i386 but it is not installable Recommends: ttf-mscorefonts-installer:i386 Recommends: ttf-umefont:i386 but it is not installable Recommends: ttf-unfonts-core:i386 but it is not installable Recommends: ttf-wqy-microhei:i386 but it is not installable Recommends: winbind:i386 Recommends: winetricks:i386 but it is not going to be installed Recommends: xdg-utils:i386 but it is not installable E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. wine-1.5 alexander@cosmo:~$ sudo apt-get install wine1.5 (...) The following packages have unmet dependencies: wine1.5 : Depends: wine1.5-i386 (= 1.5.16-0ubuntu1) E: Unable to correct problems, you have held broken packages. wine-1.5:i386 alexander@cosmo:~$ sudo apt-get install wine1.5:i386 (...) The following packages have unmet dependencies: libaudio2:i386 : Depends: libxt6:i386 but it is not going to be installed libqtgui4:i386 : Depends: libsm6:i386 but it is not going to be installed libunity-webapps0 : Depends: unity-webapps-service but it is not going to be installed openssh-client : Depends: adduser (>= 3.10) but it is not going to be installed Depends: passwd ssh : Depends: openssh-server wine1.5:i386 : Depends: wine1.5-i386:i386 (= 1.5.16-0ubuntu1) but it is not going to be installed Depends: binfmt-support:i386 (>= 1.1.2) Depends: procps:i386 Recommends: cups-bsd:i386 Recommends: gnome-exe-thumbnailer:i386 but it is not installable or kde-runtime:i386 but it is not going to be installed Recommends: ttf-droid:i386 but it is not installable Recommends: ttf-liberation:i386 but it is not installable Recommends: ttf-mscorefonts-installer:i386 Recommends: ttf-umefont:i386 but it is not installable Recommends: ttf-unfonts-core:i386 but it is not installable Recommends: ttf-wqy-microhei:i386 but it is not installable Recommends: winbind:i386 Recommends: winetricks:i386 but it is not going to be installed Recommends: xdg-utils:i386 but it is not installable E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. ia32-libs alexander@cosmo:~$ sudo apt-get install ia32-libs (...) The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch E: Unable to correct problems, you have held broken packages. ia32-libs:i386 alexander@cosmo:~$ sudo apt-get install ia32-libs:i386 (...) Package ia32-libs:i386 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source However the following packages replace it: lib32z1 lib32ncurses5 lib32bz2-1.0 lib32asound2 E: Package 'ia32-libs:i386' has no installation candidate
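
    One thing worth ruling out before anything else (a hedged suggestion, not a guaranteed fix, since the quantal wine packages themselves may simply be broken): confirm that i386 is enabled as a foreign architecture, then go after the specific -i386 dependency named in the error output.

    ```bash
    dpkg --print-foreign-architectures       # should print "i386" on a multiarch-enabled install
    sudo dpkg --add-architecture i386        # harmless if it is already enabled
    sudo apt-get update
    sudo apt-get install wine1.4-i386:i386   # the dependency the resolver says it cannot install
    ```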

    Read the article

  • Editing Routes in ASP.NET MVC

    - by imran_ku07
    Introduction: Phil Haack has written two great articles about Editable Routes, Editable Routes and Editable Routes Using App_Code. These articles are great. But if you do not need to unit test your Routes and don't care about restarting the Application Domain when editing your Routes, then the Global.asax file is the fastest and easiest way to achieve the same. In this article I will use the Global.asax file instead of the Global.asax.cs file for defining Routes, and you will also see how this whole process works. Description: You just need to Cut (or Copy) the code inside Global.asax.cs and paste it in Global.asax inside the runat="server" tag. You can simply do this by cutting the code of Global.asax.cs and pasting it inside Global.asax. Easy and quick. Now you can change Global.asax without compiling the application. How this works: I think it is worth seeing what is happening here. Actually, ASP.NET will use the Global.asax file to create a class named global_asax within the ASP namespace and place all the code in Global.asax inside the global_asax class, which is created at runtime: namespace ASP { public class global_asax : NerdDinner.MvcApplication { //Any definitions defined in Global.asax like the Application_Start method } } This class inherits from the class defined in the Application directive: <%@ Application Codebehind="Global.asax.cs" Inherits="NerdDinner.MvcApplication" Language="C#" %> ASP.NET creates a pool of application objects of this class, which varies from 1 to 100. Every request takes one of these application objects to serve the incoming request. After receiving an application object, it will call application-specific events, like Application_Start (for the first request only), Application_BeginRequest (for every request), and so on. Therefore, if these methods are defined in the global_asax class then ASP.NET will call these methods from global_asax; if not, it will use the base class methods that may be defined in Global.asax.cs (the concept known as shadowing or hiding). Summary: In this article, I showed how easily and quickly you can make your Routes editable. But also note that any change in Global.asax results in an Application Domain restart, and this technique also makes your Routes harder to unit test.

    Read the article

  • ASP.NET MVC 3 Hosting :: ASP.NET MVC 3 First Look

    - by mbridge
    MVC 3 View Enhancements MVC 3 introduces two improvements to the MVC view engine: - Ability to select the view engine to use. MVC 3 allows you to select from any of your  installed view engines from Visual Studio by selecting Add > View (including the newly introduced ASP.NET “Razor” engine”): - Support for the next ASP.NET “Razor” syntax. The newly previewed Razor syntax is a concise lightweight syntax. MVC 3 Control Enhancements - Global Filters: ASP.NET MVC 3  allows you to specify that a filter which applies globally to all Controllers within an app by adding it to the GlobalFilters collection.  The RegisterGlobalFilters() method is now included in the default Global.asax class template and so provides a convenient place to do this since is will then be called by the Application_Start() method: void RegisterGlobalFilters(GlobalFilterCollection filters) { filters.Add(new HandleLoggingAttribute()); filters.Add(new HandleErrorAttribute()); } void Application_Start() { RegisterGlobalFilters (GlobalFilters.Filters); } - Dynamic ViewModel Property : MVC 3 augments the ViewData API with a new “ViewModel” property on Controller which is of type “dynamic” – and therefore enables you to use the new dynamic language support in C# and VB pass ViewData items using a cleaner syntax than the current dictionary API. Public ActionResult Index() { ViewModel.Message = "Hello World"; return View(); } - New ActionResult Types : MVC 3 includes three new ActionResult types and helper methods: 1. HttpNotFoundResult – indicates that a resource which was requested by the current URL was not found. HttpNotFoundResult will return a 404 HTTP status code to the calling client. 2. PermanentRedirects – The HttpRedirectResult class contains a new Boolean “Permanent” property which is used to indicate that a permanent redirect should be done. Permanent redirects use a HTTP 301 status code.  The Controller class  includes three new methods for performing these permanent redirects: RedirectPermanent(), RedirectToRoutePermanent(), andRedirectToActionPermanent(). All  of these methods will return an instance of the HttpRedirectResult object with the Permanent property set to true. 3. HttpStatusCodeResult – used for setting an explicit response status code and its associated description. MVC 3 AJAX and JavaScript Enhancements MVC 3 ships with built-in JSON binding support which enables action methods to receive JSON-encoded data and then model-bind it to action method parameters. For example a jQuery client-side JavaScript could define a “save” event handler which will be invoked when the save button is clicked on the client. The code in the event handler then constructs a client-side JavaScript “product” object with 3 fields with their values retrieved from HTML input elements. Finally, it uses jQuery’s .ajax() method to POST a JSON based request which contains the product to a /theStore/UpdateProduct URL on the server: $('#save').click(function () { var product = { ProdName: $('#Name').val() Price: $('#Price').val(), } $.ajax({ url: '/theStore/UpdateProduct', type: "POST"; data: JSON.stringify(widget), datatype: "json", contentType: "application/json; charset=utf-8", success: function () { $('#message').html('Saved').fadeIn(), }, error: function () { $('#message').html('Error').fadeIn(), } }); return false; }); MVC will allow you to implement the /theStore/UpdateProduct URL on the server by using an action method as below. The UpdateProduct() action method will accept a strongly-typed Product object for a parameter. 
MVC 3 can now automatically bind an incoming JSON post value to the .NET Product type on the server without having to write any custom binding. [HttpPost] public ActionResult UpdateProduct(Product product) { // save logic here return null } MVC 3 Model Validation Enhancements MVC 3 builds on the MVC 2 model validation improvements by adding   support for several of the new validation features within the System.ComponentModel.DataAnnotations namespace in .NET 4.0: - Support for the new DataAnnotations metadata attributes like DisplayAttribute. - Support for the improvements made to the ValidationAttribute class which now supports a new IsValid overload that provides more info on  the current validation context, like what object is being validated. - Support for the new IValidatableObject interface which enables you to perform model-level validation and also provide validation error messages which are specific to the state of the overall model. MVC 3 Dependency Injection Enhancements MVC 3 includes better support for applying Dependency Injection (DI) and also integrating with Dependency Injection/IOC containers. Currently MVC 3 Preview 1 has support for DI in the below places: - Controllers (registering & injecting controller factories and injecting controllers) - Views (registering & injecting view engines, also for injecting dependencies into view pages) - Action Filters (locating and  injecting filters) And this is another important blog about Microsoft .NET and technology: - Windows 2008 Blog - SharePoint 2010 Blog - .NET 4 Blog And you can visit here if you're looking for ASP.NET MVC 3 hosting

    Read the article

  • To ref or not to ref

    - by nmarun
    So the question is: what is the point of passing a reference type along with the ref keyword? I have an Employee class as below: 1: public class Employee 2: { 3: public string FirstName { get; set; } 4: public string LastName { get; set; } 5:  6: public override string ToString() 7: { 8: return string.Format("{0}-{1}", FirstName, LastName); 9: } 10: } In my calling class, I say: 1: class Program 2: { 3: static void Main() 4: { 5: Employee employee = new Employee 6: { 7: FirstName = "John", 8: LastName = "Doe" 9: }; 10: Console.WriteLine(employee); 11: CallSomeMethod(employee); 12: Console.WriteLine(employee); 13: } 14:  15: private static void CallSomeMethod(Employee employee) 16: { 17: employee.FirstName = "Smith"; 18: employee.LastName = "Doe"; 19: } 20: }   After having a look at the code, you’ll probably say: Well, an instance of a class gets passed as a reference, so any changes to the instance inside CallSomeMethod actually modify the original object. Hence the output will be ‘John-Doe’ on the first call and ‘Smith-Doe’ on the second. And you’re right: So the question is, what’s the use of passing this Employee parameter as a ref? 1: class Program 2: { 3: static void Main() 4: { 5: Employee employee = new Employee 6: { 7: FirstName = "John", 8: LastName = "Doe" 9: }; 10: Console.WriteLine(employee); 11: CallSomeMethod(ref employee); 12: Console.WriteLine(employee); 13: } 14:  15: private static void CallSomeMethod(ref Employee employee) 16: { 17: employee.FirstName = "Smith"; 18: employee.LastName = "Doe"; 19: } 20: } The output is still the same: Ok, so is there really a need to pass a reference type using the ref keyword? I’ll remove the ‘ref’ keyword and make one more change to the CallSomeMethod method. 1: class Program 2: { 3: static void Main() 4: { 5: Employee employee = new Employee 6: { 7: FirstName = "John", 8: LastName = "Doe" 9: }; 10: Console.WriteLine(employee); 11: CallSomeMethod(employee); 12: Console.WriteLine(employee); 13: } 14:  15: private static void CallSomeMethod(Employee employee) 16: { 17: employee = new Employee 18: { 19: FirstName = "Smith", 20: LastName = "John" 21: }; 22: } 23: } In line 17 you’ll see I’ve ‘new’d up the incoming Employee parameter and then set its properties to new values. The output tells me that the original instance of the Employee class does not change. Huh? But an instance of a class gets passed by reference, so why did the values not change on the original instance, and how do I keep the two instances in sync all the time? Aah, now here’s the answer. In order to keep the objects in sync, you pass them using the ‘ref’ keyword. 1: class Program 2: { 3: static void Main() 4: { 5: Employee employee = new Employee 6: { 7: FirstName = "John", 8: LastName = "Doe" 9: }; 10: Console.WriteLine(employee); 11: CallSomeMethod(ref employee); 12: Console.WriteLine(employee); 13: } 14:  15: private static void CallSomeMethod(ref Employee employee) 16: { 17: employee = new Employee 18: { 19: FirstName = "Smith", 20: LastName = "John" 21: }; 22: } 23: } Voila! Now, to prove it beyond doubt, I said, let me try with another reference type: string. 1: class Program 2: { 3: static void Main() 4: { 5: string name = "abc"; 6: Console.WriteLine(name); 7: CallSomeMethod(ref name); 8: Console.WriteLine(name); 9: } 10:  11: private static void CallSomeMethod(ref string name) 12: { 13: name = "def"; 14: } 15: } The output was as expected, first ‘abc’ and then ‘def’ – which proves the ‘ref’ keyword works here as well. Now, what if I remove the ‘ref’ keyword?
The output should still be the same as above, right, since string is a reference type? 1: class Program 2: { 3: static void Main() 4: { 5: string name = "abc"; 6: Console.WriteLine(name); 7: CallSomeMethod(name); 8: Console.WriteLine(name); 9: } 10:  11: private static void CallSomeMethod(string name) 12: { 13: name = "def"; 14: } 15: } Wrong, the output shows ‘abc’ printed twice. Wait a minute… now how could this be? This is because string is an immutable type. Any time you “modify” a string, a new string object is created at a new memory address and the variable is re-pointed at it; the assignment inside CallSomeMethod therefore rebinds only the method’s local copy of the reference, leaving the caller’s variable untouched. The effect is similar to ‘new’ing up the Employee instance inside CallSomeMethod in the absence of the ‘ref’ keyword. Verdict: the ref keyword came to the rescue and saved the planet… again!
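To make the rebinding point concrete, here is a small, hypothetical sketch (not from the original post) showing that without ref the caller's variable keeps its original reference, while with ref it is re-pointed to the new object:

using System;

class RefDemo
{
    static void Main()
    {
        string name = "abc";
        string original = name;

        Rebind(name);                                                // without ref: caller unaffected
        Console.WriteLine(name);                                     // abc
        Console.WriteLine(object.ReferenceEquals(name, original));   // True

        Rebind(ref name);                                            // with ref: caller sees the new object
        Console.WriteLine(name);                                     // def
        Console.WriteLine(object.ReferenceEquals(name, original));   // False
    }

    static void Rebind(string name)
    {
        name = "def"; // rebinds only the local copy of the reference
    }

    static void Rebind(ref string name)
    {
        name = "def"; // rebinds the caller's variable itself
    }
}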

    Read the article

  • AS11 Oracle B2B Sync Support - Series 2

    - by sinkarbabu.kirubanithi
    In the earlier series, we discussed how to model "Sync Support" in Oracle B2B, but we haven't discussed how the response can be consumed synchronously by the back-end application or initiator of the sync request. In this sequel, we will see how we can extend it to SOA composite applications to model the end-to-end use case; this would help the initiator of the sync request receive the response synchronously. Series 2 is a little lengthier by blog standards, so be prepared before you continue further :). Let's start our discussion with a high-level scenario where one needs to initiate a synchronous request and get the response synchronously. There are various approaches available; we will see one of the simplest here. Components Involved: 1. Oracle B2B 2. Oracle JCA JMS Adapter 3. Oracle BPEL 4. All of the above are wrapped up in a single SOA composite application. Oracle B2B: We are skipping the "Sync Support" setup part in B2B, as we have already discussed that in the earlier Series 1. Here we have provided "Sync Support" samples that can be imported into B2B directly, so users can start testing them in a few minutes. Initiator Sample: This requires two JMS queues to be created, one for B2B to receive the initial outbound sync request and the other for B2B to deliver the incoming sync response to the back-end. Please enable the "Use JMS Id" option in both the internal listening and delivery channels. This enables the JCA JMS Adapter to correlate the initial B2B request and response, so that in turn it can be returned as the synchronous response of the BPEL process. Internal Listening Channel Image: Internal Delivery Channel Image: To get going without much trouble, just create queues in WebLogic with the JNDI names mentioned in the above two screenshots. If you want to use different names, then you may have to change the queue JNDI names in the sample after importing it into B2B. Here are the queue-related JNDI names used in the sample, 1. Internal Listening Channel Queue details, Name: JNDI Name: jms/b2b/syncreplyqueue 2. Internal Delivery Channel Queue details, Name: JNDI Name: jms/b2b/syncrequestqueue Here is the Initiator Sample Acme.zip Note: You may have to adjust the IP address of the GlobalChips endpoint in the Delivery Channel. Responder Sample: Contains the B2B metadata and the Callout. Just import the sample and place the callout binary under the "/tmp/callout" directory. If you choose to use a different location for the callout, then you may have to change the same in the B2B Configuration after importing the sample. Here are the artifacts, 1. Callout Source SampleCallout.java 2. Callout Binary sample-callout.jar 3. Responder Sample GlobalChips.zip Callout Details: The callout just returns the static response XML that needs to be sent back as the response to the inbound sync request. For sample purposes we have given a static response, but in production you may have to invoke a web service or something similar to get the response. IMPORTANT NOTE: For the Sync Support use case, the responder is not expected to deliver the inbound sync request to the backend, as delivering it and getting the response from the backend are the Callout's responsibility. This default behavior can be overridden by enabling the config property "b2b.SyncAppDelivery=true" in the B2B config mbean (b2b-config.xml). This makes B2B deliver the inbound sync request to the backend queue, but the response sent to the remote caller still has to come from the Callout. 2.
Oracle JCA JMS Adapter: On the initiator side, we have used the JCA JMS Request/Reply pattern to send/receive the synchronous message from B2B. 3. Oracle BPEL: Exposes a WS-SOAP endpoint that takes the payload as input, passes it to B2B, and returns B2B's synchronous response as the SOAP response. To the outside world it looks like a synchronous web service endpoint, but under the covers it uses JMS to trigger B2B to send and receive the synchronous response. 4. Composite application: All the components discussed above are wired into a SOA composite application that helps model an end-to-end synchronous use case. Here's the composite application sca_B2BSyncSample_rev1.0.jar; you may just deploy this to your AS11 SOA to make use of it. For any editing, you can just import the project in your JDEV under any SOA Application. Here are the composite application screenshots, Composite Application: BPEL With JCA JMS Adapter (Request/Reply):

    Read the article

  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their breakdown per request. But there are some limitations with the BPEL performance statistics mentioned in that blog posting: The statistics were stored in memory instead of being persisted. To avoid memory overflow, the data are stored in a buffer of limited size. When the statistic entries exceed that limit, old data are flushed out to give way to new statistics. Therefore it can only keep the last X entries of data. The statistics from 5 hours ago may not be there anymore. The BPEL engine performance statistics only include latencies. They do not provide throughputs. Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database and a lot of performance data are naturally persisted there. It is at a coarser grain than the in-memory BPEL statistics, but it does have its own strengths as it is persisted. Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time. You can run them immediately after you modify the date range to match your actual system. 1. Asynchronous/one-way messages incoming rates The following query will show the number of messages sent to one-way/async BPEL processes during a given time period, organized by process names and states select composite_name composite, state, count(*) Count from dlv_message where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS') and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS') group by composite_name, state order by Count; 2. Throughput of BPEL process instances The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances in all states, including the unfinished and faulted ones. The results will include all composites across all SOA partitions select state, count(*) Count, composite_name composite, component_name, componenttype from cube_instance where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS') and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS') group by composite_name, component_name, componenttype order by count(*) desc; 3. Throughput and latencies of BPEL process instances This query builds on the previous one, providing more comprehensive information. It gives not only throughput but also the maximum, minimum and average elapsed time of BPEL process instances.
select composite_name Composite, component_name Process, componenttype, state, count(*) Count, trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime, trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime, trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime from cube_instance where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS') and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS') group by composite_name, component_name, componenttype, state order by count(*) desc;   4. Combine all together Now let's combine all of these 3 queries together, and parameterize the start and end time stamps to make the script a bit more robust. The following script will prompt for the start and end time before querying against the database: accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)' accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)' Prompt "==== Rejected Messages ===="; REM 2012-10-24 21:00:00 REM 2012-10-24 21:59:59 select count(*), composite_dn from rejected_message where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS') and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS') group by composite_dn; Prompt " "; Prompt "==== Throughput of one-way/asynchronous messages ===="; select state, count(*) Count, composite_name composite from dlv_message where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS') and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS') group by composite_name, state order by Count; Prompt " "; Prompt "==== Throughput and latency of BPEL process instances ====" select state, count(*) Count, trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime, trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime, trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime, composite_name Composite, component_name Process, componenttype from cube_instance where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS') and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS') group by composite_name, component_name, componenttype, state order by count(*) desc;  
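For completeness, here is a small hypothetical C# sketch (not part of the original post) showing how the first of these queries could be pulled programmatically over ADO.NET; the helper name and the assumption that an open connection to the SOA infrastructure schema is available (for example via ODP.NET's OracleConnection) are mine:

using System;
using System.Data;

// Hypothetical helper: runs the one-way message throughput query shown above.
public static class SoaInfraStats
{
    public static void PrintOneWayThroughput(IDbConnection soaInfraConnection,
                                             DateTime fromTime, DateTime toTime)
    {
        using (IDbCommand cmd = soaInfraConnection.CreateCommand())
        {
            cmd.CommandText = string.Format(
                "select composite_name, state, count(*) cnt from dlv_message " +
                "where receive_date >= to_timestamp('{0}','YYYY-MM-DD HH24:MI:SS') " +
                "and receive_date <= to_timestamp('{1}','YYYY-MM-DD HH24:MI:SS') " +
                "group by composite_name, state",
                fromTime.ToString("yyyy-MM-dd HH:mm:ss"),
                toTime.ToString("yyyy-MM-dd HH:mm:ss"));

            using (IDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // composite_name [state]: count
                    Console.WriteLine("{0} [{1}]: {2}",
                        reader.GetString(0), reader.GetValue(1), reader.GetValue(2));
                }
            }
        }
    }
}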

    Read the article

  • Code Structure / Level Design: Plants vs Zombies game level dissection

    - by lalan
    Hi Friends, I am interested in learning the class structure of Plants vs Zombies, particularly level design; for those who haven't played it - this video contains nice play-through: http://www.youtube.com/watch?v=89DfdOIJ4xw. How would I go ahead and design the code, mostly structure & classes, which allows for maximum flexibility & clean development? I am familiar with data driven design concepts, and would use events to handle most of dynamic behavior. Dissection at macro level: (Once every Level) Load tilemap, props, etc -- basically build the map (Once every Level) Camera Movement - might consider it as short cut-scene (Once every Level) Show Enemies you'll face during present level (Once every Level) Unit Selection Window/Panel - selection of defensive plants (Once every Level) Camera Movement - might consider it as short cut-scene (Once every Level) HUD Creation - based on unit selection (Level Loop) Enemy creation - based on types of zombies allowed (Level Loop) Sun/Resource generation (Level Loop) Show messages like 'huge wave of zombies coming', 'final wave' (Level Loop) Other unique events - Spawn gifts, money, tombstones, etc (Once every Level) Unlock new plant Potential game scripts: a) Level definitions: Level_1_1.xml, Level_1_2.xml, etc. Level_1_1.xml :: Sample script <map> <tilemap>tilemapFrontLawn</tilemap> <SpawnPoints> tiles where particular type of zombies (land vs water) may spawn</spawnPoints> <props> position, entity array -- lawnmower, </props> </map> <zombies> <... list of zombies who gonna attack by ids...> </zombies> <plants> <... list by plants which are available for defense by ids...> </plants> <progression> <ZombieWave name='first wave' spawnScript='zombieLightWave.lua' unlock='null'> <startMessages time=1.5>Ready</startMessages> <endMessages time=1.5>Huge wave of zombies incoming</endMessages> </ZombieWave> </progression> b) Entities definitions: .xmls containing zombies, plants, sun, lawnmower, coins, etc description. Potential classes: //LevelManager - Based on the level under play, it will load level script. Few of the // functions it may have: class LevelManager { public: bool load(string levelFileName); bool enter(); bool update(float deltatime); bool exit(); private: LevelData* mLevelData; } // LevelData - Contains the details of level loaded by LevelManager. class LevelData { private: string file; // array of camera,dialog,attackwaves, etc in active level LevelCutSceneCamera** mArrayCutSceneCamera; LevelCutSceneDialog** mArrayCutSceneDialog; LevelAttackWave** mArrayAttackWave; .... // which camera,dialog,attackwave is active in level uint mCursorCutSceneCamera; uint mCursorCutSceneDialog; uint mCursorAttackWave; public: // based on cursor, get the next camera,dialog,attackwave,etc in active level // return false/true based on failure/success bool nextCutSceneCamera(LevelCutSceneCamera**); bool nextCutSceneDialog(LevelCutSceneDialog**); } // LevelUnderPlay- LevelManager class LevelUnderPlay { private: LevelCutSceneCamera* mCutSceneCamera; LevelCutSceneDialog* mCutSceneDialog; LevelAttackWave* mAttackWave; Entities** mSelectedPlants; Entities** mAllowedZombies; bool isCutSceneCameraActive; public: bool enter(); bool update(float deltatime); bool exit(); } I am totally confused.. :( Does it make sense of using class composition (have flat class hierarchy) for managing levels. Is it a good idea to just add/remove/update sprites (or any drawable stuff) to current scene from LevelManager or LevelUnderPlay? 
If I want to make non-linear level design, how should I go ahead? Perhaps I would need a LevelProgression class, which would decide what to do based on a decision tree. Any suggestions would be appreciated very much. Thanks for your time, lalan
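Purely as an illustration of the LevelProgression idea mentioned in the question above (this is not from the original thread, and all names are invented), a branching, data-driven progression could be sketched roughly like this:

using System.Collections.Generic;

// Hypothetical sketch: each node decides the next wave based on the outcome
// of the previous one, which allows non-linear progression within a level.
public class ProgressionNode
{
    public string WaveScript { get; set; }             // e.g. "zombieLightWave.lua"
    public ProgressionNode OnSuccess { get; set; }     // next wave if the player coped well
    public ProgressionNode OnHeavyDamage { get; set; } // alternate branch, e.g. an easier wave
}

public class LevelProgression
{
    private ProgressionNode current;

    public LevelProgression(ProgressionNode root) { current = root; }

    // Called by the level loop once a wave ends; returns the script to spawn next,
    // or null when the level is complete.
    public string Advance(bool playerTookHeavyDamage)
    {
        current = playerTookHeavyDamage ? current.OnHeavyDamage : current.OnSuccess;
        return current != null ? current.WaveScript : null;
    }
}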

    Read the article

  • Allowing Access to HttpContext in WCF REST Services

    - by Rick Strahl
    If you’re building WCF REST Services you may find that WCF’s OperationContext, which provides some amount of access to Http headers on inbound and outbound messages, is pretty limited in that it doesn’t provide access to everything and sometimes in a not so convenient manner. For example accessing query string parameters explicitly is pretty painful: [OperationContract] [WebGet] public string HelloWorld() { var properties = OperationContext.Current.IncomingMessageProperties; var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty; string queryString = property.QueryString; var name = StringUtils.GetUrlEncodedKey(queryString,"Name"); return "Hello World " + name; } And that doesn’t account for the logic in GetUrlEncodedKey to retrieve the querystring value. It’s a heck of a lot easier to just do this: [OperationContract] [WebGet] public string HelloWorld() { var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty; return "Hello World " + name; } Ok, so if you follow the REST guidelines for WCF REST you shouldn’t have to rely on reading query string parameters manually but instead rely on routing logic, but you know what: WCF REST is a PITA anyway and anything to make things a little easier is welcome. To enable the second scenario there are a couple of steps that you have to take on your service implementation and the configuration file. Add aspNetCompatibilityEnabled in web.config First you need to configure the hosting environment to support ASP.NET when running WCF Service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request. <system.serviceModel>     <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> </system.serviceModel> Mark up your Service Implementation with the AspNetCompatibilityRequirements Attribute Next you have to mark up the Service Implementation – not the contract if you’re using a separate interface!!! – with the AspNetCompatibilityRequirements attribute: [ServiceContract(Namespace = "RateTestService")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class RestRateTestProxyService Typically you’ll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run if the web.config attribute is not set. Required has to have it set. All these settings determine whether an ASP.NET host AppDomain is used for requests. Once Allowed or Required has been set on the implemented class you can make use of the ASP.NET HttpContext object. When I allow for ASP.NET compatibility in my WCF services I typically add a property that exposes the Context and Request objects a little more conveniently: public HttpContext Context { get { return HttpContext.Current; } } public HttpRequest Request { get { return HttpContext.Current.Request; } } While you can also access the Response object and write raw data to it and manipulate headers THAT is probably not such a good idea as both your code and WCF will end up writing into the output stream. However it might be useful in some situations where you need to take over output generation completely and return something completely custom. Remember though that WCF REST DOES actually support that as well with Stream responses that essentially allow you to return any kind of data to the client so using Response should really never be necessary. Should you or shouldn’t you?
WCF purists will tell you never to muck with the platform-specific features or the underlying protocol, and if you can avoid it you definitely should avoid it. Querystring management in particular can be handled largely with Url Routing, but there are exceptions of course. Try to use what WCF natively provides – if possible – as it makes the code more portable. For example, if you do enable ASP.NET Compatibility you won’t be able to self-host a WCF REST service. At the same time realize that especially in WCF REST there are a number of big holes, or access to some features is a royal pain, and so it’s not unreasonable to access the HttpContext directly, especially if it’s only for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services. It looks like vNext of the WCF REST stuff will feature many improvements along these lines, with much deeper native HTTP support that is often so useful in REST applications, along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I’m looking forward to this stuff as WCF REST as it exists today still is a royal pain (in fact I’m struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve – grrrr…). © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET AJAX WCF
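As an aside on the Stream responses mentioned in the post above, here is a minimal, hypothetical sketch of a WCF REST operation that takes over the response body entirely; the service name, URI template and content are assumptions, not code from the article (hosting still requires the usual webHttp behavior or WebServiceHostFactory):

using System.IO;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class RawContentService
{
    [OperationContract]
    [WebGet(UriTemplate = "report")]
    public Stream GetReport()
    {
        // Take full control of the content type and body without writing to HttpContext.Response
        WebOperationContext.Current.OutgoingResponse.ContentType = "text/html";
        string html = "<html><body><h1>Report</h1></body></html>";
        return new MemoryStream(Encoding.UTF8.GetBytes(html));
    }
}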

    Read the article

  • Yet another blog about IValueConverter

    - by codingbloke
    After my previous blog on a Generic Boolean Value Converter I thought I might as well blog up another IValueConverter implementation that I use. The Generic Boolean Value Converter effectively converts an input which only has two possible values to one of two corresponding objects.  The next logical step would be to create a similar converter that can take an input which has multiple (but finite and discrete) values and map it to one of multiple corresponding objects.  To put it more simply, a Generic Enum Value Converter. Now we already have a tool that can help us in this area, the ResourceDictionary.  A simple IValueConverter implementation around it would create a StringToObjectConverter like so:- StringToObjectConverter using System; using System.Windows; using System.Windows.Data; using System.Linq; using System.Windows.Markup; namespace SilverlightApplication1 {     [ContentProperty("Items")]     public class StringToObjectConverter : IValueConverter     {         public ResourceDictionary Items { get; set; }         public string DefaultKey { get; set; }                  public StringToObjectConverter()         {             DefaultKey = "__default__";         }         public virtual object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             if (value != null && Items.Contains(value.ToString()))                 return Items[value.ToString()];             else                 return Items[DefaultKey];         }         public virtual object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             return Items.FirstOrDefault(kvp => value.Equals(kvp.Value)).Key;         }     } } There are some things to note here.  The bulk of managing the relationship between an object instance and the related string key is handled by the Items property being a ResourceDictionary.  Also there is a catch-all "__default__" key value which allows for only a subset of the possible input values to be mapped to an object, with the rest falling through to the default. We can then set one of these up in Xaml:-             <local:StringToObjectConverter x:Key="StatusToBrush">                 <ResourceDictionary>                     <SolidColorBrush Color="Red" x:Key="Overdue" />                     <SolidColorBrush Color="Orange" x:Key="Urgent" />                     <SolidColorBrush Color="Silver" x:Key="__default__" />                 </ResourceDictionary>             </local:StringToObjectConverter> You could well imagine that in the model being bound these key names would actually be members of an enum.  This still works due to the use of ToString in the Convert method.  Hence the only requirement for the incoming object is that it has a ToString implementation which generates a sensible string instead of simply the type name. I can’t imagine right now a scenario where this converter would be used in a TwoWay binding but there is no reason why it can’t.  I prefer to avoid leaving the ConvertBack throwing an exception if that can be avoided.  Hence it just enumerates the KeyValuePair entries to find a value that matches and returns the key it’s mapped to. Ah but now my sense of balance is assaulted again.  Whilst StringToObjectConverter is quite happy to accept an enum type via the Convert method, it returns a string from the ConvertBack method, not the original input enum type that arrived in the Convert.
Now I could address this by complicating the ConvertBack method and examining the targetType parameter etc.  However I prefer a different approach, deriving a new EnumToObjectConverter class instead. EnumToObjectConverter using System; namespace SilverlightApplication1 {     public class EnumToObjectConverter : StringToObjectConverter     {         public override object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             string key = Enum.GetName(value.GetType(), value);             return base.Convert(key, targetType, parameter, culture);         }         public override object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             string key = (string)base.ConvertBack(value, typeof(String), parameter, culture);             return Enum.Parse(targetType, key, false);         }     } }   This is a more belts-and-braces solution with specific use of Enum.GetName and Enum.Parse.  Whilst it’s more explicit in that a developer has to choose to use it, it is only really necessary when using TwoWay binding; in OneWay binding the base StringToObjectConverter would serve just as well. The observant might note that there is actually no "Generic" aspect to this solution in the end.  The use of a ResourceDictionary eliminates the need for that.
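As a small, hypothetical usage sketch (not from the original post), the EnumToObjectConverter above could be exercised directly from code like this; in real XAML you would normally declare it as a resource and reference it from a Binding instead:

using System.Globalization;
using System.Windows;
using System.Windows.Media;
using SilverlightApplication1; // namespace of the converters shown above

// Hypothetical status enum used only for this sketch
public enum Status { Overdue, Urgent, Normal }

public static class ConverterDemo
{
    public static Brush BrushFor(Status status)
    {
        var converter = new EnumToObjectConverter { Items = new ResourceDictionary() };
        converter.Items.Add("Overdue", new SolidColorBrush(Colors.Red));
        converter.Items.Add("Urgent", new SolidColorBrush(Colors.Orange));
        converter.Items.Add("__default__", new SolidColorBrush(Colors.Gray)); // Gray as a stand-in for Silver

        // Status.Normal is not in the dictionary, so it falls through to "__default__"
        return (Brush)converter.Convert(status, typeof(Brush), null,
                                        CultureInfo.InvariantCulture);
    }
}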

    Read the article

  • How to install iodbc in Ubuntu 12.10

    - by Hanynowsky
    I need to install the library iodbc (depends on libodbc2) in my Quantal machine. Yet there is creepy dependency problem. This was replaced somehow by unixodbc which I don't have, installed. Here is what I get when I try to install : sudo apt-get install libiodbc2-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libiodbc2-dev : Depends: libiodbc2 (= 3.52.7-2build2) but it is not going to be installed Depends: iodbc (= 3.52.7-2build2) but it is not going to be installed E: Unable to correct problems, you have held broken packages. I can tell that iodbc conflicts with odbcinst. Yet, I cannot remove it due to the following: sudo apt-get remove odbcinst Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: akonadi-backend-mysql calligra-l10n-engb kde-l10n-engb kdevelop-l10n kdevelop-php-docs-l10n kdevelop-php-l10n libakonadi-kabc4 libakonadi-notes4 libakonadiprotocolinternals1 libboost-thread1.49.0 libdmtx0a libgpgme++2 libkcalcore4 libkdgantt2 libkholidays4 libkimap4 libkldap4 libkmbox4 libkmime4 libkolabxml0 libkpgp4 libkresources4 libksieve4 libprison0 libqgpgme1 libqrencode3 libxerces-c3.1 Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: akonadi-server k3b k3b-i18n katepart kde-runtime kdelibs-bin kdelibs5-plugins kdepim-runtime kdepim-strigi-plugins kdepimlibs-kio-plugins kdoctools kget kmag kmail kmix kmousetool ksystemlog kubuntu-debug-installer language-pack-kde-ar language-pack-kde-en libakonadi-calendar4 libakonadi-contact4 libakonadi-kcal4 libakonadi-kde4 libakonadi-kmime4 libcalendarsupport4 libincidenceeditorsng4 libk3b6 libkabc4 libkactivities-bin libkactivities6 libkalarmcal2 libkatepartinterfaces4 libkcal4 libkcalutils4 libkcddb4 libkde3support4 libkdepim4 libkdepimdbusinterfaces4 libkdewebkit5 libkemoticons4 libkfile4 libkgapi0 libkhtml5 libkio5 libkleo4 libkmanagesieve4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkolab0 libkonq-common libkonq5abi1 libkontactinterface4 libkparts4 libkpimidentities4 libkpimtextedit4 libkpimutils4 libkprintutils4 libksieveui4 libktexteditor4 libktnef4 libktorrent4 libkworkspace4abi2 libkxmlrpcclient4 libmailcommon4 libmailimporter4 libmailtransport4 libmessagecomposer4 libmessagecore4 libmessagelist4 libmessageviewer4 libmicroblog4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomuksync4 libnepomukutils4 libplasma3 libsoprano4 libtemplateparser4 libvirtodbc0 nepomuk-core odbcinst odbcinst1debian2 plasma-scriptengine-javascript qapt-batch soprano-daemon virtuoso-minimal 0 upgraded, 0 newly installed, 89 to remove and 0 not upgraded. After this operation, 126 MB disk space will be freed. Do you want to continue [Y/n]? Other info: Why do the "iodbc" and "libmyodbc" packages conflict with each other? The root problem is that I need to use a new feature introduced in the latest MySQL Workbench builds (DB Migration) which uses odbc for that matter. Here is what Mysql doc says about it : Linux The Migration Wizard uses iODBC as a driver manager for all of its ODBC connections in Linux. 
This may give you some troubles because most Linux distributions provide ODBC drivers compiled against unixODBC. This is another driver manager not supported by MySQL Workbench so you won’t be able to use those drivers unless you compile them against iODBC. Here’s what you should do. Make sure that you have iODBC installed. If you are using Debian, Ubuntu or another Debian based distro, type this command in your terminal: $ sudo apt-get install iodbc libiodbc2-dev libpq-dev libssl-dev For RPM based distros (RedHat, Fedora, etc.) type this command instead of the previous one: $ sudo yum install iodbc iodbc-dev libpqxx-devel openssl-devel Now we need to install the PostgreSQL ODBC drivers. Download the psqlODBC source tarball from here. Use the latest version available for download. As of this writing the latest version corresponds to the file psqlodbc-09.01.0200.tar.gz. Extract this tarball to a directory in your hard drive and open a terminal and cd into that directory. Type this in the terminal window: $ ./configure --with-iodbc --enable-pthreads $ make $ sudo make install Verify that you have the file psqlodbcw.so in the /usr/local/lib directory. This package seems to pose the problem probably: dpkg -s odbcinst1debian2 Package: odbcinst1debian2 Status: install ok installed Priority: optional Section: libs Installed-Size: 241 Maintainer: Ubuntu Developers <[email protected]> Architecture: amd64 Multi-Arch: same Source: unixodbc Version: 2.2.14p2-5ubuntu4 Replaces: unixodbc (<< 2.1.1-2) Depends: libc6 (>= 2.14), libltdl7 (>= 2.4.2), odbcinst Pre-Depends: multiarch-support Breaks: libiodbc2, libmyodbc (<< 5.1.6-2), odbc-postgresql (<< 1:09.00.0310-1.1), tdsodbc (<< 0.82-8) Conflicts: odbcinst1, odbcinst1debian1 Description: Support library for accessing odbc ini files This package contains the libodbcinst library from unixodbc, a library used by ODBC drivers for reading their configuration settings from /etc/odbc.ini and ~/.odbc.ini. . Also contained in this package are the driver setup plugins, which describe the features supported by individual ODBC drivers. Homepage: http://www.unixodbc.org/ Original-Maintainer: Steve Langasek <[email protected]> I have no broken packages:

    Read the article

< Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >