Search Results

Search found 2252 results on 91 pages for 'facing'.


  • Backup of KVM VMs running on Ubuntu 12.04.1 (Precise) from a remote machine

    - by Dr. Death
    I am creating a library API that will back up all the VMs running on a KVM hypervisor. The VMs can be of any type. I take the backup from a remote machine and need to store it on a remote server. I have KVM and libvirt installed on my system. Some of my VMs are LVM-based and some are plain VMs running on KVM. While researching I found an excellent Perl script for taking the backup (http://pof.eslack.org/2010/12/23/best-solution-to-fully-backup-kvm-virtual-machines/), but since I am developing this library in C++ I cannot use it directly; it has, however, given me a good understanding of how the process should work. One thing I have not been able to sort out: if my VMs were not created with virt-manager but with some other tool, the virsh list command does not show them among the running VMs, even though they are running perfectly on my KVM server. Is there a way to get these VMs listed? Secondly, when I take the backup from the remote machine I drop out of my ssh session as soon as the libvirt command finishes, so for every command I need to ssh again. Is there a way to avoid ssh-ing each and every time? I have already set up RSA key authentication, but once a command finishes, control returns to the remote machine, which then tries to find the source VM location on its own local drives, and that fails. This is the main problem I am facing. Also, for the LVM-based VMs I am able to take a live backup, but the non-LVM machines get suspended and I cannot back them up live. Since my library will run only on the remote machine, I may not know the VMs' configuration on the KVM server, so the procedure needs to be consistent for all VMs. Please share anything related to this issue so that I can take live backups of the non-LVM VMs as well. I'll post my progress and any research findings as I go. Thanks in advance for your suggestions.
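    One way to avoid re-establishing an ssh session for every libvirt command is to let libvirt manage the connection itself through a remote URI. A minimal sketch, assuming the hypervisor is reachable as kvmhost and key-based ssh is already in place (the host name and guest name are placeholders):

    ```bash
    # Run libvirt commands from the backup machine over a qemu+ssh connection;
    # this lists every guest that libvirt knows about, whether or not it was
    # created with virt-manager.
    virsh -c qemu+ssh://root@kvmhost/system list --all

    # The same URI works from the C++ side via libvirt's virConnectOpen();
    # the CLI equivalent for grabbing a guest definition looks like this.
    virsh -c qemu+ssh://root@kvmhost/system dumpxml myguest > myguest.xml
    ```

    Note that guests started directly with qemu-kvm, outside libvirt, will not appear in virsh output at all; they would first have to be defined in libvirt (virsh define) before any libvirt-based tooling can see them.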

    Read the article

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention to stop using them, but would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8gb memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, its not really a problem. Currently, we have access to business class verizon fios. If I understand correctly, we can get at least 25 dedicated IP addresses with this service, this should be enough. Requirements: Each server runs Linux Centos 6.3 Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ) Some of the servers are capable of serving static files and Python driven REST APIs Some of the servers host a Cassandra database cluster One or more of the servers are a Redis database servers One or more of the servers are PostgreSQL servers Questions: What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together? Are Desktop computers ok for this? We have found that we are mostly RAM-bottle necked, I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in Desktop computers. Will we have problems with the WIFI cards in the desktops or any other unexpected hardware limitation? What tools should be used to "image" the servers. For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux Centos 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this? What other things are we missing that we should be concerned about? Thanks so much!
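    For the imaging question, CentOS itself does not ship a dedicated imaging tool, but a raw disk image or a kickstart-based reinstall are the two usual approaches. A rough sketch of the raw-image route, assuming the reference machine's disk is /dev/sda and a large USB drive is mounted at /mnt/usb (device names and file names are placeholders):

    ```bash
    # Capture a compressed raw image of a configured node; run this from a live
    # or rescue environment so the filesystem is quiescent.
    dd if=/dev/sda bs=4M | gzip > /mnt/usb/redis-node.img.gz

    # Restore onto a new, identically sized disk on the next desktop.
    gunzip -c /mnt/usb/redis-node.img.gz | dd of=/dev/sda bs=4M
    ```

    Tools such as Clonezilla, or a kickstart file plus a local yum mirror, tend to scale better once the node count grows past a handful of machines.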

    Read the article

  • How to have Windows Server DNS use hosts file to resolve specific host names

    - by user41079
    Hello everyone, I'm facing a small problem with the Windows Server 2003 DNS service. In my corporation I run a Microsoft DNS server (172.16.0.12) that resolves names for our company intranet (domain names ending in dev.nls, resolving to 172.16.. addresses), and it is also configured as a forwarder, sending other domain names (e.g. *.google.com, *.sf.net) to real Internet DNS servers. This internal DNS server is never meant to serve users from the outside world. We also run a mail server (handling incoming mail for a real Internet domain, @nlscan.com) inside the company firewall, reachable in either of two ways: by connecting to 172.16.0.10 from within the intranet, or by connecting to mail.nlscan.com (which resolves to 202.101.116.9) from the Internet. Note that 172.16.0.10 and 202.101.116.9 are not the same physical machine; the 202 address is a firewall machine that port-forwards ports 25 and 110 to the intranet address 172.16.0.10. Now my question: when users inside the corporate LAN resolve mail.nlscan.com, they get 202.101.116.9. That is correct and workable, but not good, because the mail traffic goes to the firewall machine and then bounces back to 172.16.0.10. I would like our internal DNS server to intercept the name mail.nlscan.com and resolve it to 172.16.0.10, so I hoped I could add an entry to a "hosts" file on 172.16.0.12. But how can the Microsoft DNS server be made to use such a "hosts" file? You might suggest simply having intranet users connect to 172.16.0.10 directly, but that is inconvenient: suppose an employee works on his laptop in the office by day and at home at night. When he is at home, he cannot use 172.16.0.10. Creating a zone for nlscan.com on our internal DNS server is not feasible either, because the authoritative name server for nlscan.com is at our ISP and is responsible for resolving the other host names and sub-domains under nlscan.com. Thank you in advance.
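    Windows DNS does not consult the hosts file, but one commonly used workaround is a so-called pinpoint zone: a zone whose name is exactly mail.nlscan.com, so it overrides that single name while every other nlscan.com record is still resolved by the ISP's name server. A hedged sketch using dnscmd on the internal DNS server (addresses taken from the question above, everything else standard syntax):

    ```
    rem Create a primary zone whose name is exactly the host name in question
    dnscmd 172.16.0.12 /ZoneAdd mail.nlscan.com /Primary /file mail.nlscan.com.dns

    rem Add an A record at the zone root pointing at the internal mail server
    dnscmd 172.16.0.12 /RecordAdd mail.nlscan.com @ A 172.16.0.10
    ```

    Internal clients would then resolve mail.nlscan.com to 172.16.0.10, while laptops outside the LAN keep using the public DNS and reach 202.101.116.9 as before.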

    Read the article

  • Upgrading TFS 2005 to TFS 2010 fails at "Executing servicing step Upgrade Version Control Identities"

    - by nadeemmar
    Hi all, I have been trying to upgrade our TFS 2005 to TFS 2010, with no luck so far. I went through the TFS installation guide and many upgrade guides, but could not overcome the issue I am facing, which seems to be different from the other issues described out there. In our company we have a domain forest with several domains, let's say domains A, B, and C. TFS is in domain A and has users from all three domains, and all domains have trust relationships between them. However, domain C was deleted several months ago. During the upgrade, whenever I reach the collection upgrade step, the following error is raised:

        [Info  @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Data: ExtensionType = Microsoft.TeamFoundation.VersionControl.Server.PlugIns.WorkspaceSecurityNamespaceExtension
        [Info  @09:57:50.997] [2010-12-29 09:55:47Z] Servicing step Create VersionControl Security Namespaces passed. (ServicingOperation: UpgradePreTfs2010Databases; Step group: Upgrade.TfsVersionControl)
        [Info  @09:57:50.997] [2010-12-29 09:55:47Z] Executing servicing step Upgrade Version Control Identities. (ServicingOperation: UpgradePreTfs2010Databases; Step group: Upgrade.TfsVersionControl)
        [Info  @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Performer: VersionControl
        [Info  @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Type: UpgradeIdentity
        [Info  @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Data Text:
        [Error @09:57:50.997] [2010-12-29 09:55:51Z][Error] Sync error for identity: System.Security.Principal.WindowsIdentity, S-1-5-21-1004336348-527237240-682003330-2818 - The trust relationship between the primary domain and the trusted domain failed

    I looked up the SID and it appears to belong to a user in the deleted domain C. With a bit of googling I figured out that the TFSConfig Identities command can be used to remap users from one domain to another. I went ahead and created local users matching the users we had in domain C and ran the TFSConfig Identities /Change command, and it executed successfully. However, I still get the same error. I am stuck and can't figure out how to move forward :( I need your expertise: has anyone faced this issue before? Do I need to change these identities on TFS 2005 before I start the upgrade? I forgot to mention that I am following the upgrade-with-move approach: I created a virtual machine for testing, installed SQL Server 2008, restored the TFS databases, installed TFS 2010, and ran the upgrade wizard. Regards, Nadeem
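    For reference, the remapping step is usually run against the server that currently owns the databases (TFS 2005 before the move, or the new TFS 2010 instance after the databases are attached), and the command looks roughly like the sketch below; the domain and machine names are placeholders and the exact switches should be checked against the TFSConfig help on the installed version:

    ```
    rem List the identities TFS currently knows about (helps spot orphaned domain C SIDs)
    TFSConfig Identities

    rem Remap accounts from the deleted domain to matching local accounts on the TFS box
    TFSConfig Identities /change /fromdomain:DOMAINC /todomain:TFSSERVER
    ```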

    Read the article

  • DHCP server with multiple interfaces on ubuntu, destroys default gateway

    - by Henrik Kjus Alstad
    I use Ubuntu and have several interfaces: eth0, which is my internet connection and gets its configuration from a DHCP server totally outside of my control, and eth1 through eth4, for which I have set up my own DHCP server (ISC DHCP server). It seems to work, and I even get an IP address from the foreign DHCP server on the internet-facing interface. However, the gateway for eth0 apparently got broken after I installed my local DHCP server for eth1-eth4 (I say that because eth0 gets an IP and I can ping other hosts on the local network, but I cannot reach the internet). The relevant parts of /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

        auto eth1
        iface eth1 inet static
            address 10.0.1.1
            netmask 255.255.255.0
            network 10.0.1.0
            broadcast 10.0.1.255
            gateway 10.0.1.1
            mtu 8192

        auto eth2
        iface eth2 inet static
            address 10.0.2.1
            netmask 255.255.255.0
            network 10.0.2.0
            broadcast 10.0.2.255
            gateway 10.0.2.1
            mtu 8192

    My /etc/default/isc-dhcp-server:

        INTERFACES="eth1 eth2 eth3 eth4"

    So why does my local DHCP server break the gateway for eth0 when I tell it not to listen on eth0? Does anyone see the problem, or know what I can do to fix it? The issue does indeed seem to be the gateways; "netstat -nr" gives:

        0.0.0.0    10.X.X.X       0.0.0.0    UG    0 0 0    eth3

    when it should have been:

        0.0.0.0    129.2XX.X.X    0.0.0.0    UG    0 0 0    eth0

    So for some reason my local DHCP setup overrides the gateway I get from the network's DHCP. Edit: dhcpd.conf looks like this (I included only the eth1 subnet):

        ddns-update-style none;
        not authoritative;

        subnet 10.0.1.0 netmask 255.255.255.0 {
            interface eth1;
            option domain-name "example.org";
            option domain-name-servers ns1.example.org, ns2.example.org;
            default-lease-time 600;
            max-lease-time 7200;
            range 10.0.1.10 10.0.1.100;

            host camera1_1 {
                hardware ethernet 00:30:53:11:24:6E;
                fixed-address 10.0.1.10;
            }
            host camera2_1 {
                hardware ethernet 00:30:53:10:16:70;
                fixed-address 10.0.1.11;
            }
        }

    Also, the gateway is set correctly if I run "/etc/init.d/networking restart" in a terminal, but that does not help me; I need the correct gateway to be set during startup, and I'd rather find the source of the problem.
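    One detail worth checking, offered as a hedged guess rather than a confirmed diagnosis: ifupdown treats every "gateway" line in /etc/network/interfaces as a default route to install, so the static stanzas for eth1-eth4 (each with "gateway 10.0.x.1") can install a competing default route that wins over the one eth0 receives via DHCP, which matches the route via eth3 seen above. A sketch of a static stanza without its own gateway:

    ```
    auto eth1
    iface eth1 inet static
        address 10.0.1.1
        netmask 255.255.255.0
        broadcast 10.0.1.255
        mtu 8192
        # no "gateway" line here; only eth0 (DHCP) should provide the default route
    ```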

    Read the article

  • Asterisk Outgoing CDR Logging To MySQL

    - by user3295551
    I am trying to use CDR logging (to MySQL) with custom fields. The problem only appears when an outbound call is placed; for inbound calls I can log the custom field with no trouble. The reason it matters is that the custom CDR field I need is a unique value for each user on the system. sip.conf:

        ...
        ...
        [sales_department](!)
        type=friend
        host=dynamic
        context=SalesAgents
        disallow=all
        allow=ulaw
        allow=alaw
        qualify=yes
        qualifyfreq=30

        ;; company sales agents:
        [11](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [12](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [13](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [14](sales_agent)
        secret=xxxxxx
        callerid="<...>"

    extensions.conf:

        [SalesAgents]
        include => Services

        ; Outbound calls
        exten => _1NXXNXXXXXX,1,Dial(SIP/${EXTEN}@myprovider)

        ; Inbound calls
        exten => 100,1,NoOp()
         same => n,Set(CDR(agent_id)=11)
         same => n,CELGenUserEvent(Custom Event)
         same => n,Dial(${11_1},25)
         same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
         same => n(unavail),VoiceMail(11@asterisk)
         same => n,Hangup()
         same => n(busy),VoiceMail(11@asterisk)
         same => n,Hangup()

        exten => 101,1,NoOp()
         same => n,Set(CDR(agent_id)=12)
         same => n,CELGenUserEvent(Custom Event)
         same => n,Dial(${12_1},25)
         same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
         same => n(unavail),VoiceMail(12@asterisk)
         same => n,Hangup()
         same => n(busy),VoiceMail(12@asterisk)
         same => n,Hangup()
        ...
        ...

    In the inbound section of the dialplan above I am able to insert the custom CDR field (agent_id). But in the outbound section I am stumped on how to tell the dialplan which agent_id is placing the outbound call. My question: how can I take agent_id=11, agent_id=12, agent_id=13, agent_id=14, and so on, and use that as a custom CDR field on outbound calls?
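    One hedged approach, not taken from the original post: on an outbound call the calling channel is the agent's own SIP peer, so the peer name itself (11, 12, 13, 14) can be written into the CDR field before dialing, without a separate extension per agent. A sketch, assuming the peer names stay numeric and the Asterisk version supports the CHANNEL(peername) function:

    ```
    [SalesAgents]
    ; Outbound calls: stamp the CDR with the SIP peer name of the caller
    exten => _1NXXNXXXXXX,1,Set(CDR(agent_id)=${CHANNEL(peername)})
     same => n,Dial(SIP/${EXTEN}@myprovider)
    ```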

    Read the article

  • How can Windows XP/7 users cleanly connect to Mac OS X Server 10.9.4 Mavericks with Active Directory integration?

    - by JakeGould
    I’m a Linux/Unix systems admin who also manages a Macintosh server infrastructure & there is a lone Mac Mini in the mix running 10.9.4 that I would like Windows XP & Windows 7 users to connect to with little or no hassle. The problem? Windows users can’t seem to even get to the point of a password prompt yet connect. Mind you this server replaced a Mac OS X 10.6.8 server that had issues, but never had issues with Windows users connected. The gist of this post is: The tons of different messages out there about Mac OS X 10.9.4 Samba support are mind-numbingly confusing. Can anyone share some solid specifics here? I’ve read pieces like this one here that suggest turning off file sharing & then adding a share with AFP/SMB enabled would work. But the suggestion seems to apply to 10.8. And from what I know a lot has changed in Samba support in 10.9 let alone the iterations to 10.9.4. Then I found this great tutorial here that explains things step-by-step. Which seems like it should work, but the problem is the example given applies to a local user created on the Mac when I would like users in an Active Directory group—which the Mac is bound to—access the Mac Mini shares. There are also tons of great tips here on MacWindows.com but nothing seems solid to the issue I am facing. So from what I am reading these are my options: Local User Versus Active Directory: Setup a common local user on the Mac OS X 10.9.4 server to be used for Samba sharing since Active Directory won’t work. Is this really the case? Because loss of AD integration is a major pain. Do Extended File Attributes Get Retained from Windows Users: If this were to work, how do extended attributes come into play? Loss of metadata & related info is not an option. How Fragile is Any of this to Updates: How does any of this shake out with Mac OS X updates as well as Windows updates? Installing Official, Open Source Samba: Would upgrading the Samba install on the server to the official open source Samba via a package like SMBUp or via the Hombrew method described here help or make the issue worse? I fully understand there have historically been issues in mixed environments, but nowadays Windows users connecting to a Mac seem to have a truly hellish road ahead of them. Unless I am missing something?
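    Before deciding between a local account and Active Directory, it may be worth confirming from the Mac's command line that the server still considers itself bound to the domain and what it is actually sharing over SMB; a small diagnostic sketch, with the domain account name purely illustrative:

    ```bash
    # Show the Active Directory binding the server believes it has
    dsconfigad -show

    # List the share points and which protocols (AFP/SMB) each is exported over
    sudo sharing -l

    # Check whether an AD user resolves on the Mac at all
    id "DOMAIN\\someuser"
    ```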

    Read the article

  • Proxy / Squid 2.7 / Debian Wheezy 6.7 / lots of TCP Timed-out

    - by Maroon Ibrahim
    i'm facing a lot of TCP timed-out on a busy cache server and here below my sysctl.conf configuration as well as an output of "netstat -st" Kernel 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3 x86_64 GNU/Linux Any advice or help would be highly appreciated #################### Sysctl.conf cat /etc/sysctl.conf net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1 fs.file-max = 65536 net.ipv4.tcp_low_latency = 1 net.core.wmem_max = 8388608 net.core.rmem_max = 8388608 net.ipv4.ip_local_port_range = 1024 65000 fs.aio-max-nr = 131072 net.ipv4.tcp_fin_timeout = 10 net.ipv4.tcp_keepalive_time = 60 net.ipv4.tcp_keepalive_intvl = 10 net.ipv4.tcp_keepalive_probes = 3 kernel.threads-max = 131072 kernel.msgmax = 32768 kernel.msgmni = 64 kernel.msgmnb = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 net.ipv4.ip_forward = 1 net.ipv4.tcp_timestamps = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_syncookies = 1 net.ipv4.ip_dynaddr = 1 vm.swappiness = 0 vm.drop_caches = 3 net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_ecn = 0 net.ipv4.tcp_max_orphans = 131072 net.ipv4.tcp_orphan_retries = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.tcp_max_syn_backlog = 32768 net.core.netdev_max_backlog = 131072 net.ipv4.tcp_mem = 6085248 16227328 67108864 net.ipv4.tcp_wmem = 4096 131072 33554432 net.ipv4.tcp_rmem = 4096 174760 33554432 net.core.rmem_default = 33554432 net.core.rmem_max = 33554432 net.core.wmem_default = 33554432 net.core.wmem_max = 33554432 net.core.somaxconn = 10000 # ################ Netstat results /# netstat -st IcmpMsg: InType0: 2 InType3: 233754 InType8: 56251 InType11: 23192 OutType0: 56251 OutType3: 437 OutType8: 4 Tcp: 20680741 active connections openings 63642431 passive connection openings 1126690 failed connection attempts 2093143 connection resets received 13059 connections established 2649651696 segments received 2195445642 segments send out 183401499 segments retransmited 38299 bad segments received. 14648899 resets sent UdpLite: TcpExt: 507 SYN cookies sent 178 SYN cookies received 1376771 invalid SYN cookies received 1014577 resets received for embryonic SYN_RECV sockets 4530970 packets pruned from receive queue because of socket buffer overrun 7233 packets pruned from receive queue 688 packets dropped from out-of-order queue because of socket buffer overrun 12445 ICMP packets dropped because they were out-of-window 446 ICMP packets dropped because socket was locked 33812202 TCP sockets finished time wait in fast timer 622 TCP sockets finished time wait in slow timer 573656 packets rejects in established connections because of timestamp 133357718 delayed acks sent 23593 delayed acks further delayed because of locked socket Quick ack mode was activated 21288857 times 839 times the listen queue of a socket overflowed 839 SYNs to LISTEN sockets dropped 41 packets directly queued to recvmsg prequeue. 
79166 bytes directly in process context from backlog 24 bytes directly received in process context from prequeue 2713742130 packet headers predicted 84 packets header predicted and directly queued to user 1925423249 acknowledgments not containing data payload received 877898013 predicted acknowledgments 16449673 times recovered from packet loss due to fast retransmit 17687820 times recovered from packet loss by selective acknowledgements 5047 bad SACK blocks received Detected reordering 11 times using FACK Detected reordering 1778091 times using SACK Detected reordering 97955 times using reno fast retransmit Detected reordering 280414 times using time stamp 839369 congestion windows fully recovered without slow start 4173098 congestion windows partially recovered using Hoe heuristic 305254 congestion windows recovered without slow start by DSACK 933682 congestion windows recovered without slow start after partial ack 77828 TCP data loss events TCPLostRetransmit: 5066 2618430 timeouts after reno fast retransmit 2927294 timeouts after SACK recovery 3059394 timeouts in loss state 75953830 fast retransmits 11929429 forward retransmits 51963833 retransmits in slow start 19418337 other TCP timeouts 2330398 classic Reno fast retransmits failed 2177787 SACK retransmits failed 742371590 packets collapsed in receive queue due to low socket buffer 13595689 DSACKs sent for old packets 50523 DSACKs sent for out of order packets 4658236 DSACKs received 175441 DSACKs for out of order packets received 880664 connections reset due to unexpected data 346356 connections reset due to early user close 2364841 connections aborted due to timeout TCPSACKDiscard: 1590 TCPDSACKIgnoredOld: 241849 TCPDSACKIgnoredNoUndo: 1636687 TCPSpuriousRTOs: 766073 TCPSackShifted: 74562088 TCPSackMerged: 169015212 TCPSackShiftFallback: 78391303 TCPBacklogDrop: 29 TCPReqQFullDoCookies: 507 TCPChallengeACK: 424921 TCPSYNChallenge: 170388 IpExt: InBcastPkts: 351510 InOctets: -609466797 OutOctets: -1057794685 InBcastOctets: 75631402 #

    Read the article

  • Moving from single-site to multi-site Active Directory has broken OWA proxying

    - by messick
    Originally we had the following setup: OfficeExch01 has Mailbox Role and CAS Role OfficeExch01 is in the office. CoLoExch01 had just CAS Role. CoLoExch01 is internet facing and in a CoLo. Three AD domain controllers in the default site. Users could go to https://webmail.whatever.com/owa, get proxyed to OfficeExch01 and everything was great. Well, we recently setup a separate AD site and put a domain controller and the ColoExch01 server in the new site. I also made that remote DC be a Global Catalog. Now, users get the following error: Outlook Web Access is not available. If the problem continues, contact technical support for your organization and tell them the following: There is no Microsoft Exchange Client Access server that has the necessary configuration in the Active Directory site where the mailbox is stored. I also see event 41 errors in the logs: The Client Access server "https://webmail.xxxxxxx.com/owa" attempted to proxy Outlook Web Access traffic for mailbox "/o=XXXXX/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=xxxxxxk". This failed because no Client Access server with an Outlook Web Access virtual directory configured for Kerberos authentication could be found in the Active Directory site of the mailbox. The simplest way to configure an Outlook Web Access virtual directory for Kerberos authentication is to set it to use Integrated Windows authentication by using the Set-OwaVirtualDirectory cmdlet in the Exchange Management Shell, or by using the Exchange Management Console. If you already have a Client Access server deployed in the target Active Directory site with an Outlook Web Access virtual directory configured for Kerberos authentication, the proxying Client Access server may not be finding that target Client Access server because it does not have an internalUrl parameter configured. You can configure the internalUrl parameter for the Outlook Web Access virtual directory on the Client Access server in the target Active Directory site by using the Set-OwaVirtualDirectory cmdlet. Looking this up I see a lot talk about ExternalURL and InternalURL settings. However, everything worked great until we made the new AD site. I also made sure the internal CAS server's /owa virtual directory is set to use Integrated Authentication. Is there something I need to do to allow Exchange to see that I've made these AD changes?
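    The event text points at the two usual knobs; a hedged Exchange Management Shell sketch of what it is asking for, with the server and URL names invented for illustration rather than taken from this environment:

    ```powershell
    # On the internal CAS in the new AD site: allow Integrated (Kerberos) auth and
    # publish an InternalUrl so the internet-facing CAS can locate it and proxy to it.
    Set-OwaVirtualDirectory "OFFICEEXCH01\owa (Default Web Site)" `
        -WindowsAuthentication $true `
        -BasicAuthentication $false `
        -InternalUrl "https://officeexch01.internal.example.com/owa"

    # Confirm what each CAS is advertising
    Get-OwaVirtualDirectory | Format-List Server,InternalUrl,WindowsAuthentication,BasicAuthentication
    ```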

    Read the article

  • Interop.Outlook.UserProperties.Add causing problem during connection time

    - by aanataliya
    Hi all, I have created a plug-in for Outlook. The plug-in contains only the code below:

        private void OnNewOutlookInspector(Outlook.Inspector OutlookInsptr)
        {
            Outlook.MailItem MlItem = (Outlook.MailItem)OutlookInsptr.CurrentItem;

            // If I remove the line below, everything works fine.
            MlItem.UserProperties.Add("INSPINIT", Outlook.OlUserPropertyType.olText, true, true).Value = "1";
        }

        public void OnConnection(object application, Extensibility.ext_ConnectMode connectMode, object addInInst, ref System.Array custom)
        {
            applicationObject = application;
            addInInstance = addInInst;
            MessageBox.Show("in connection new 2");
            OutlkApp = (Outlook.Application)application;
            OutlkInsptrs = OutlkApp.Inspectors;
            OutlkInsptrs.NewInspector += new Outlook.InspectorsEvents_NewInspectorEventHandler(OnNewOutlookInspector);
        }

    The problem I am facing: when I send an HTML mail while the plug-in is enabled, it arrives at the receiving end as plain text. Below is the mail content, headers and body, at the receiving end:

        x-sender: [email protected]
        x-receiver: [email protected]
        Received: from blr-s-07.pointcrossblr.com ([192.168.1.107]) by blr-ws-134.pointcrossblr.com with Microsoft SMTPSVC(6.0.2600.5949); Wed, 22 Dec 2010 17:11:02 +0530
        Received: from blrws134 ([192.168.1.175]) by blr-s-07.pointcrossblr.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 22 Dec 2010 17:11:02 +0530
        From: "Ashif Nataliya" <[email protected]>
        To: <[email protected]>
        Cc: <[email protected]>
        Subject: RTF FRM blr to pc.com cc blr-ws-134
        Date: Wed, 22 Dec 2010 17:11:02 +0530
        Message-ID: <[email protected]>
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="----=_NextPart_000_00F7_01CBA1FB.36115580"
        X-Mailer: Microsoft Outlook 14.0
        Content-Language: en-us
        X-MS-TNEF-Correlator: 00000000DCB2344DE8F50F4FBC91085BB5C06D55A4172000
        thread-index: AcuhzRuTOBkvHPUnS1aLi9+cHNAWhA==
        Return-Path: [email protected]
        X-OriginalArrivalTime: 22 Dec 2010 11:41:02.0822 (UTC) FILETIME=[1C788860:01CBA1CD]

        This is a multipart message in MIME format.

        ------=_NextPart_000_00F7_01CBA1FB.36115580
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit

        HTML Test Test Mail

        ------=_NextPart_000_00F7_01CBA1FB.36115580
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="winmail.dat"
        // and some other code.....

    Any help is appreciated. Thanks.
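    Not part of the original post, but a frequently reported cause of this symptom is that adding a UserProperty to an outgoing message makes Outlook send it in TNEF format (note the winmail.dat part and the X-MS-TNEF-Correlator header above), which non-Outlook recipients see as plain text plus an attachment. A hedged C# sketch of a commonly suggested workaround, writing the flag as a named MAPI property through PropertyAccessor instead; the property-set GUID shown is the public strings set and the overall claim is "reported to help", not guaranteed:

    ```csharp
    private void OnNewOutlookInspector(Outlook.Inspector inspector)
    {
        Outlook.MailItem item = (Outlook.MailItem)inspector.CurrentItem;

        // A named MAPI property set via PropertyAccessor is reported not to force
        // the message into TNEF the way UserProperties.Add does.
        const string propName =
            "http://schemas.microsoft.com/mapi/string/{00020329-0000-0000-C000-000000000046}/INSPINIT";
        item.PropertyAccessor.SetProperty(propName, "1");
    }
    ```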

    Read the article

  • What's the best way to do user profile/folder redirect/home directory archiving?

    - by tpederson
    My company is in dire need of a redesign around how we handle user account administration. I've been tasked with automating the process. The end goal is to have the whole works triggered by the business, and IT only looking in when there's an error reported. The interim phase is going to be semi-manual. That is a level 2 tech inputs the user's info and supervises the process. The current hurdle I'm facing is user profile archiving. Our security team requires us to archive the profile directories for any terminated user for 60 days in case the legal team requires access to their files. Our AD is as much a mess as everything else, so there are some users with home directories and some with profiles. Anyone who has a profile dir in AD also has a good deal of their profile redirected to our file servers over DFS. In order to complete the process manually you find the user in AD, disable them, find their home/profile dir, go there and take ownership, create an archive folder, move all their files over, then delete the old dir. Some users have many many gigs of nonsense and this can take quite some time. Even automated the process would not be a quick one. I'm thinking that I need to have a client side C# GUI for the quick stuff and some server side batch script or console app to offload this long running process. I have a batch script that works decently using takeown and robocopy, but I wonder if a C# console app would do a better job. So, my question at long last is, what do you think is the best way to handle this? I can't imagine this is a unique problem, how do other admins get this done? The last place I worked was easily 10x larger than the place I'm in now. If we would have been doing this manual crap there, they'd have needed a team of at least 30 full time workers to keep up. I have decent skills in C#.net and batch scripting, but am a quick study and I have used most every language once or twice. Thank you for reading this and I look forward to seeing what imaginative solutions you all can come up with.
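    Since the manual routine already boils down to takeown plus a move, the long-running server-side piece can stay a plain batch script that the C# GUI (or a future automated trigger) simply hands a username to; a hedged sketch of that step, with the share and log paths made up for illustration:

    ```
    @echo off
    rem Archive a terminated user's redirected profile folder for the 60-day hold.
    rem Usage: archive-profile.cmd <username>
    set SRC=\\fileserver\profiles$\%1
    set DST=\\fileserver\archive$\%1

    takeown /F "%SRC%" /R /D Y
    robocopy "%SRC%" "%DST%" /E /MOVE /COPYALL /R:1 /W:1 /LOG+:C:\logs\profile-archive.log
    ```

    Running it from a console app or scheduled task keeps the slow copy off the helpdesk tech's desktop while the GUI only collects the input and reports errors.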

    Read the article

  • Building an SSL server farm

    - by dan
    I'm interested in building the the architecture in the article referenced below. I currently have a modestly-priced layer-4 load balancer and my application servers are the SSL endpoints. I want to put an SSL server farm in between my load balancer and my app servers. Then I will put another inexpensive load balancer between the SSL farm and my app servers, to do layer-7 routing. My web application has a fairly high amount of consumer traffic, that 6 servers can handle at about 50% capacity. Additionally, I have infrastructure traffic that is several orders of magnitude heavier than my consumer traffic. This is data coming in from all over the world that must integrate with my web application in real time. In total I have 18 app servers to handle all the traffic, plus 6 database servers. I will be adding 6 more app servers over the next 2 weeks and another 6 the 2 weeks after that. Conservatively, I estimate I will need to scale to 120 servers by the end of the year. My motivation right now is to separate the consumer traffic from the infrastructure traffic. The consumer traffic is higher priority than the infrastructure traffic and I cannot allow a stampede on the infrastructure side to take down my consumer-facing servers. Having a website that is always up is the top priority. However if there is a failure in one of the consumer app servers, I want to route that traffic to the servers designated for infrastructure traffic. The complication is that all the traffic is addressed using the same hostname and is nearly 100% https. The only way in my case to distinguish infrastructure from consumer traffic is by URL (poor architecture I inherited), so I need a layer 7 load balancer to be able to route. However for that to work I need either a fancy hardware-based SSL terminator or an SSL server farm as described above. Because my user base is rapidly scaling, I worry that if I go down the hardware path it will become very expensive very fast, especially since I will need 4 of everything for high availability (2 identical setups in 2 facilities). Meanwhile, the above diagram seems very flexible and more horizontally scalable. Has anyone built this before? Are there pre-built configurations? What considerations should I make and what software should I use (I've heard of people using apache with mod-ssl, nginx, and stunnel)? Also, when does it make sense to buy an expensive load balancer vs building an SSL server farm? http://1wt.eu/articles/2006_lb/index_05.html
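    As a concrete example of the software route, a single node in the SSL farm can be as small as the following nginx server block, decrypting and handing plain HTTP to the layer-7 balancer behind it; the names, port, and certificate paths are placeholders, and this is a sketch rather than a recommendation of nginx over apache/mod_ssl or stunnel:

    ```nginx
    server {
        listen 443 ssl;
        server_name www.example.com;

        ssl_certificate     /etc/nginx/ssl/example.crt;
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            # Forward decrypted traffic to the inexpensive layer-7 balancer,
            # which can then route consumer vs infrastructure URLs.
            proxy_pass http://l7-balancer.internal:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```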

    Read the article

  • hosting 2 webapps under 1 apache/tomcat

    - by mkoryak
    I am trying to host multiple webapps under Tomcat 6 behind Apache 2 via mod_jk, and I am at my wits' end with this. The problem I am facing is that both domains seem to point to a single Tomcat 'domain'. My server.xml looks like this:

        <Service name="Catalina">
          <Connector port="8080" protocol="HTTP/1.1"
                     connectionTimeout="20000"
                     URIEncoding="UTF-8"
                     redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
          <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
          <Engine name="Catalina" defaultHost="dogself.com">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="dogself.com" appBase="webapps-dogself"
                  unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
            <Host name="natashacarter.com" appBase="webapps-natashacarter.com"
                  unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>

    My workers.properties looks like this:

        worker.list=dogself,natashacarter
        worker.dogself.port=8009
        worker.dogself.host=dogself.com
        worker.dogself.type=ajp13
        worker.natashacarter.port=8010
        worker.natashacarter.host=natashacarter.com
        worker.natashacarter.type=ajp13

    Finally, my Apache vhosts look like this:

        <VirtualHost 69.164.218.75:80>
          ServerName dogself.com
          DocumentRoot /srv/www/dogself.com/public_html/
          ErrorLog /srv/www/dogself.com/logs/error.log
          CustomLog /srv/www/dogself.com/logs/access.log combined
          JkMount /* dogself
        </VirtualHost>

    and

        <VirtualHost 69.164.218.75:80>
          ServerName natashacarter.com
          DocumentRoot /srv/www/dogself.com/public_html/
          ErrorLog /srv/www/dogself.com/logs/error.log
          CustomLog /srv/www/dogself.com/logs/access.log combined
          JkMount /* natashacarter
        </VirtualHost>

    When I log into the manager webapp on either dogself.com or natashacarter.com, I can deploy to a context path on dogself and that same context path appears on natashacarter, so I know for a fact that both names are hitting the same Tomcat host. Edit: I just found this in my mod_jk log:

        [Sun Feb 20 21:15:43 2011] [28546:3075521168] [warn] map_uri_to_worker_ext::jk_uri_worker_map.c (962): Uri * is invalid. Uri must start with /
        [Sun Feb 20 21:16:44 2011] [28548:3075521168] [info] ajp_send_request::jk_ajp_common.c (1496): (dogself) all endpoints are disconnected, detected by connect check (1), cping (0), send (0)

    but I am not sure why dogself wouldn't respond. Please help a brother out.
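    A hedged observation, not from the post itself: since both public domain names resolve to the same machine, pointing worker.*.host at dogself.com and natashacarter.com adds nothing and can even confuse the endpoint checks, and mod_jk already forwards the original Host header so Tomcat's two Host elements can separate the sites on their own. A sketch of the same workers file aimed at the local AJP connectors:

    ```
    worker.list=dogself,natashacarter

    worker.dogself.type=ajp13
    worker.dogself.host=localhost
    worker.dogself.port=8009

    worker.natashacarter.type=ajp13
    worker.natashacarter.host=localhost
    worker.natashacarter.port=8010
    ```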

    Read the article

  • Clear OS always showing "Operation too slow. Less than 1 bytes/sec"

    - by Blue Gene
    I have been trying to install a ClearOS add-on, but nothing works because I hit this error on every mirror in the .repo file when running yum install squid:

        http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror2-houston.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-houston.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        Error: failure: repodata/primary.sqlite.bz2 from clearos-core: [Errno 256] No more mirrors to try.

    How can I fix this? I am able to access the repo through the web, and nothing seems to be wrong with the repo itself, so where could the problem be? I tried yum clean all, but it didn't help. Is there a way to fix this? Right now I am not able to install any package at all.
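    Purely as a hedged suggestion: the "(28, ...)" text is curl's low-speed abort, which yum drives through its timeout and minimum-rate settings, so on a slow or traffic-shaped link it can help to relax those in the [main] section of /etc/yum.conf before blaming the mirrors (values below are illustrative, and minrate may not be supported by every yum build of that era):

    ```
    # /etc/yum.conf  ([main] section)
    timeout=300
    minrate=1
    ```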

    Read the article

  • Can't save screen resolution setting.

    - by Searock
    Hi, my screen resolution in Windows and in the previous Ubuntu version (9.04) was 1152 x 864, but Ubuntu 10.04 only offers me 1024 x 768 and 1360 x 768. I have somehow managed to add a 1152x864 mode using the xrandr command:

        searock@searock-desktop:~$ cvt 1152 864
        1152x864 59.96 Hz (CVT 1.00M3) hsync: 53.78 kHz; pclk: 81.75 MHz
        Modeline "1152x864_60.00"  81.75  1152 1216 1336 1520  864 867 871 897 -hsync +vsync
        searock@searock-desktop:~$ xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        searock@searock-desktop:~$ xrandr --addmode S-video 1152x864
        xrandr: cannot find output "S-video"
        searock@searock-desktop:~$ xrandr
        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1360x768        59.8
           1024x768        60.0*
           800x600         60.3    56.2
           848x480         60.0
           640x480         59.9    59.9
          1152x864_60.00 (0x124)  81.0MHz
                h: width  1152 start 1216 end 1336 total 1520 skew 0 clock 53.3KHz
                v: height  864 start  867 end  871 total  897        clock 59.4Hz
        searock@searock-desktop:~$ xrandr --addmode VGA1 1152x864_60.00

    But the problem is that whenever I restart the computer I get this message:

        Could not apply the stored configuration for the monitors.
        Could not find a suitable configuration of screens.

    and the resolution falls back to 1024 x 768. My graphics card is an Intel(R) 82945G Express Chipset Family. Is there any way I can fix this once and for all? Thanks.

    Edit 1: rumtscho has suggested I modify the xorg.conf file, but I am not sure what HorizSync means. Is it the horizontal frequency? My monitor model is Acer V173; here is my specification. So what should HorizSync and VertRefresh be?

    Edit 2: I have edited my xorg.conf as follows:

        Section "Monitor"
            Identifier   "Configured Monitor"
            HorizSync    30-80
            VertRefresh  55-75
        EndSection

    then added the resolution and restarted the computer, and I am still facing the same problem. Is there something I am missing?

    Edit 3: For now I have edited /etc/gdm/Init/Default (the gdm startup script) to include the following xrandr commands, just below the line initctl -q emit login-session-start DISPLAY_MANAGER=gdm:

        xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        xrandr --addmode VGA1 1152x864_60.00
        xrandr -s 1152x864_60.00

    This has solved my problem, but these commands have increased my computer's boot time. I think I will have to edit the xorg file properly.

    Edit 4: Instead of adding this to the gdm startup scripts, I have created a shell script and added it to startup (System - Preferences - Startup Applications):

        #!/bin/bash
        xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        xrandr --addmode VGA1 1152x864_60.00
        xrandr -s 1152x864_60.00

    And don't forget to add execute permission (Right Click - Properties - Permissions - Allow executing file as program).
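    For a boot-time fix without the startup script, the same mode can be declared statically in xorg.conf; a hedged sketch for this Intel/VGA1 setup, reusing the poster's Monitor section and the cvt Modeline above (the "Default Screen" identifier is an assumption about the existing file, not a confirmed value):

    ```
    Section "Monitor"
        Identifier   "Configured Monitor"
        HorizSync    30-80
        VertRefresh  55-75
        Modeline "1152x864_60.00"  81.75  1152 1216 1336 1520  864 867 871 897 -hsync +vsync
    EndSection

    Section "Screen"
        Identifier "Default Screen"
        Monitor    "Configured Monitor"
        SubSection "Display"
            Modes  "1152x864_60.00" "1024x768"
        EndSubSection
    EndSection
    ```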

    Read the article

  • Postfix + SASLAUTHD + MySQL authentication problems

    - by Or W
    I've been trying to sort this out for the past 6 hours or so, this is the error message I'm facing (Running CentOS x64): /var/log/maillog: Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: SASL authentication failure: Password verification failed Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL PLAIN authentication failed: authentication failure Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL LOGIN authentication failed: authentication failure /var/log/messages: Jun 22 20:15:38 ptroa saslauthd[9401]: do_auth : auth failure: [user=myuser] [service=smtp] [realm=domain.com] [mech=pam] [reason=PAM auth error] I have dovecot installed as well and I'm able to receive emails via the MySQL authentication. The problem is when I'm trying to use SMTP to send out emails. Some config files: /etc/postfix/main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname smtpd_banner = Server Message biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. 
myhostname = domain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 virtual_alias_domains = virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf virtual_mailbox_base = /home/vmail virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 smtpd_sasl_auth_enable = yes broken_sasl_auth_clients = yes smtpd_sasl_authenticated_header = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination virtual_create_maildirsize = yes virtual_maildir_extended = yes proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_cano$ virtual_transport = dovecot dovecot_destination_recipient_limit = 1 /etc/default/saslauthd: START=yes DESC="SASL Authentication Daemon" NAME="saslauthd" MECHANISMS="pam" MECH_OPTIONS="" THREADS=5 OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r" /etc/pam.d/smtp: #%PAM-1.0 #auth include password-auth #account include password-auth auth required pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1 account sufficient pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
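    One piece that is easy to overlook with this symptom, offered only as a hedged pointer: smtpd also reads a Cyrus SASL application config that tells it to use saslauthd and which mechanisms to offer, and the saslauthd/PAM/MySQL chain can be tested independently of Postfix. A sketch; the file location varies by distribution (on CentOS it is usually /usr/lib64/sasl2/smtpd.conf or /etc/sasl2/smtpd.conf), and the credentials below are placeholders:

    ```
    # /etc/sasl2/smtpd.conf  (or /usr/lib64/sasl2/smtpd.conf)
    pwcheck_method: saslauthd
    mech_list: PLAIN LOGIN
    ```

    ```bash
    # Exercise the saslauthd -> PAM -> pam_mysql path directly, bypassing Postfix,
    # against the socket saslauthd was started with (-m /var/spool/postfix/var/run/saslauthd)
    testsaslauthd -u someuser@example.com -p 'secret' -s smtp \
        -f /var/spool/postfix/var/run/saslauthd/mux
    ```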

    Read the article

  • Windows7 issue in mutli- tasking and memory

    - by Nitesh
    I have been seeing some problems in my Windows OS recently. First, my system configuration: processor - Intel(R) Core(TM)2 Quad CPU Q8400 @ 2.66 GHz; installed memory (RAM) - 4.00 GB (3.00 GB usable); system type - 32-bit operating system. I run two operating systems on this machine, Windows 7 and CentOS. I have used this setup for a long time without problems, but for the past couple of weeks Windows 7 has been giving me trouble. I used to run multiple tasks almost every time I logged in with no issues, but now I cannot multitask at all. For example: 1. I can no longer listen to music in Windows Media Player while viewing photos; the system suddenly stops responding, comes back after about 5 minutes, and the music resumes from where it stopped. 2. When I browse the internet it hangs all of a sudden, does not respond for 2 or 3 minutes, and then continues loading. It happens with every operation on the system; even typing is difficult now, as the machine hangs very frequently even when I am doing a single task. I have never come across this kind of problem before. The first thing I did was check processor and memory usage. The processor usage looks fine; for a single task it sits around 3 to 5%. The memory is the weird part: even with no tasks running it is using somewhere around 34 to 41%. So I opened Task Manager and clicked Resource Monitor on the Performance tab, and the memory section showed roughly this: Hardware Reserved - 1029 MB, In Use - 1430 MB, Modified - 49 MB, Standby - 1566 MB, Free - 22 MB. I could also see: Available 1588 MB, Cached 1615 MB, Total 3067 MB, Installed 4096 MB. That is all I could find out, and I have no idea why my computer is suddenly acting so strangely; the performance problem is growing day by day. I also don't know whether there is a problem in the BIOS; I have left it at default settings for a long time. Please help me, and thank you in advance for reading this.

    Read the article

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules-of-thumb exist for a good RAM to Disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get. But now for some performance details! During normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely high read as it needs to deduplicate records identified in the first half of processing. This is the workflow for which "keep your working set in RAM" is made for, and we're designing around that assumption. The entire dataset is continually hit with random queries from end-user derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so will be very likely to incur disk hits. A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old, and is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of Replicas to help distribute the Read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB is a good rule-of-thumb for slow disks, As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.

    Read the article

  • How to link specific ports to specific domains with Apache virtual hosts?

    - by theJoe
    We have a forward-facing linux box running Apache HTTP server that is acting as a reverse proxy for several back-end servers. The servers are accessed through specific domain names and ports and are set up as virtual hosts within Apache as such: Listen 8001 Listen 8002 <Virtualhost *:8001> ServerName service.one.mycompany.com ProxyPass / http://internal.one.mycompany.com:8001/ ProxyPassReverse / http://internal.one.mycompany.com:8001/ RewriteEngine On RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] </Virtualhost> <Virtualhost *:8002> ServerName service.two.mycompany.com ProxyPass / http://internal.two.mycompany.com:8002/ ProxyPassReverse / http://internal.two.mycompany.com:8002/ RewriteEngine On RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] </Virtualhost> The proxy server has only one IP address, and both domains are pointing to it. Accessing internal.one via service.one works fine, as does accessing internal.two via service.two. Now the problem is that Apache does not take the requesting domain into account when accessing the virtual hosts. What I mean is that both domains work for both ports: requests for service.one:8002 proxies to internal.two:8002, and requests for service.two:8001 proxies to internal.one:8001, where ideally both these requests should be denied. I can get around this by creating more virtual hosts that explicitly deny these requests: NameVirtualHost *:8001 NameVirtualHost *:8002 <Virtualhost *:8001> ServerName service.two.mycompany.com Redirect permanent / http://errorpage.mycompany.com/ </Virtualhost> <Virtualhost *:8002> ServerName service.one.mycompany.com Redirect permanent / http://errorpage.mycompany.com/ </Virtualhost> But this is not an ideal solution, since we plan to add more services to the proxy, and each new port would need to be explicitly denied on all the other domains, and each new domain would need to be explicitly denied on all ports it is not utilizing. As we add more services, the number of virtual hosts can get out of hand quickly. My question, then, is whether there is a better way? Can we explicitly tie specific ports to specific domains in a virtual host so that only that domain-port combination is processed, and all other combinations are not? Things I’ve tried: Adding NameVirtualHost *:8001, etc. without the additional virtual hosts. Setting ProxyRequests On and Off, as well as ProxyPreserveHost On and Off Adding the server name or IP address to the virtual host header, e.g. <VirtualHost service.one.mycompany.com:8001> Using the <proxy> directive inside the virtual host directive. Lots and lots of googling. The proxy server is running CentOS 6.2 64-bit, Apache HTTPD server 2.2.15. As mentioned, the proxy server has only one IP address, and all the domains we are using are pointing to it.
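    One pattern that avoids enumerating every forbidden domain/port pair, offered as a suggestion rather than the answer: in Apache 2.2, when no ServerName matches, the first name-based vhost defined for that address:port wins, so a single catch-all "reject" vhost declared first per Listen port replaces the per-domain deny blocks. Growth then scales with domains plus ports instead of domains times ports. A sketch for one port, reusing the error page from the question:

    ```apache
    NameVirtualHost *:8001

    # Defined first, so any hostname not explicitly served on :8001 lands here
    <VirtualHost *:8001>
        ServerName catchall-8001.invalid
        Redirect permanent / http://errorpage.mycompany.com/
    </VirtualHost>

    <VirtualHost *:8001>
        ServerName service.one.mycompany.com
        ProxyPass        / http://internal.one.mycompany.com:8001/
        ProxyPassReverse / http://internal.one.mycompany.com:8001/
    </VirtualHost>
    ```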

    Read the article

  • jenkins-maven-android when running throwing the error "android-sdk-linux/platforms" is not a directory"

    - by Sam
    I start setting up the jenkins-maven-android and i'm facing an issue when running the jenkin job. My Machine Details $uname -a Linux development2 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux Steps to install the Android SDK in Ubuntu https://help.ubuntu.com/community/AndroidSDK since i'm working on headless env (ssh to client machine) i used following command to install the platform tools android update sdk --no-ui download apache maven and install on http://maven.apache.org/download.html mvn -version output root@development2:/opt/android-sdk-linux/tools# mvn -version Apache Maven 3.0.4 (r1232337; 2012-01-17 08:44:56+0000) Maven home: /opt/apache-maven-3.0.4 Java version: 1.6.0_24, vendor: Sun Microsystems Inc. Java home: /usr/lib/jvm/java-6-openjdk/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.0.0-12-virtual", arch: "amd64", family: "unix" root@development2:/opt/android-sdk-linux/tools# ran the following two command as mention in below sudo apt-get update sudo apt-get install ia32-libs Problems with Eclipse and Android SDK http://developer.android.com/sdk/installing/index.html As error suggest i gave the path to android SDK in jenkins build config still im getting the error clean install -Dandroid.sdk.path=/opt/android-sdk-linux Can someone help me to resolve this. Thanks Error I'm Getting Waiting for Jenkins to finish collecting data mavenExecutionResult exceptions not empty message : Failed to execute goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources (default-generate-sources) on project base-template: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. cause : Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. Stack trace : org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources (default-generate-sources) on project base-template: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. 
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:225) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59) at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156) at org.jvnet.hudson.maven3.launcher.Maven3Launcher.main(Maven3Launcher.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239) at org.jvnet.hudson.maven3.agent.Maven3Main.launch(Maven3Main.java:158) at hudson.maven.Maven3Builder.call(Maven3Builder.java:98) at hudson.maven.Maven3Builder.call(Maven3Builder.java:64) at hudson.remoting.UserRequest.perform(UserRequest.java:118) at hudson.remoting.UserRequest.perform(UserRequest.java:48) at hudson.remoting.Request$2.run(Request.java:326) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:679) Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:110) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209) ... 27 more Caused by: com.jayway.maven.plugins.android.InvalidSdkException: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. 
at com.jayway.maven.plugins.android.AndroidSdk.assertPathIsDirectory(AndroidSdk.java:125) at com.jayway.maven.plugins.android.AndroidSdk.getPlatformDirectories(AndroidSdk.java:285) at com.jayway.maven.plugins.android.AndroidSdk.findAvailablePlatforms(AndroidSdk.java:260) at com.jayway.maven.plugins.android.AndroidSdk.<init>(AndroidSdk.java:80) at com.jayway.maven.plugins.android.AbstractAndroidMojo.getAndroidSdk(AbstractAndroidMojo.java:844) at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.generateR(GenerateSourcesMojo.java:329) at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.execute(GenerateSourcesMojo.java:102) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101) ... 28 more channel stopped Finished: FAILURE* android home Echo root@development2:~# echo $ANDROID_HOME /opt/android-sdk-linux

    Read the article

  • SQL SERVER – Disabled Index and Update Statistics

    - by pinaldave
Have you ever come across a situation where a conversation never gets over and continues even though the original point of discussion has passed? I am facing the same situation in the case of the Disabled Index. Here are the links to the original conversations:

SQL SERVER – Disable Clustered Index and Data Insert – a reader had an issue here with a disabled index
SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index – a reader asked about the effect of rebuilding indexes

The same reader asked me today – "I understood what the disabled indexes do; what is their effect on statistics? Is it true that even though indexes are disabled, they continue updating the statistics?"

The answer is very interesting:

If the clustered index is disabled, you will not be able to update statistics at all for any index on the table.
If the clustered index is enabled and a nonclustered index is disabled, updating the statistics of the table automatically updates the statistics of ALL the indexes on the table (both disabled and enabled).

If you are not satisfied with the answer, let us go over a simple example. I have written the necessary comments in the code itself to give a clear idea.

USE tempdb
GO
-- Drop Table if Exists
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
DROP TABLE [dbo].[TableName]
GO
-- Create Table
CREATE TABLE [dbo].[TableName](
[ID] [int] NOT NULL,
[FirstCol] [varchar](50) NULL
)
GO
-- Insert Some data
INSERT INTO TableName
SELECT 1, 'First'
UNION ALL
SELECT 2, 'Second'
UNION ALL
SELECT 3, 'Third'
UNION ALL
SELECT 4, 'Fourth'
UNION ALL
SELECT 5, 'Five'
GO
-- Create Clustered Index
ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
-- Create Nonclustered Index
CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC)
GO
-- Check that all the indexes are enabled
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO

Now let us update the statistics of the table and check the statistics update date.

-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO

Now let us disable the indexes and check that they are disabled using sys.indexes.

-- Disable Indexes
-- Disable Nonclustered Index
ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
GO
-- Disable Clustered Index
ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
GO
-- Check that all the indexes are disabled
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO

Let us try to update the statistics of the table.

-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
/*
-- The above operation should throw the following error
Msg 1974, Level 16, State 1, Line 1
Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled.
*/

When we try to update the statistics, it throws an error because the clustered index is disabled. Now let us enable the clustered index only and attempt to update the statistics of the table right after that.

-- Now let us rebuild clustered index only
ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
GO
-- Check the status of all the indexes
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO
-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO

We can clearly see that even though the nonclustered index is disabled, its statistics are also updated. If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time the statistics are updated for the table, the statistics for disabled indexes are also updated.

-- Clean up
DROP TABLE [TableName]
GO

The complete script is given below for easy reference.

USE tempdb
GO
-- Drop Table if Exists
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
DROP TABLE [dbo].[TableName]
GO
-- Create Table
CREATE TABLE [dbo].[TableName](
[ID] [int] NOT NULL,
[FirstCol] [varchar](50) NULL
)
GO
-- Insert Some data
INSERT INTO TableName
SELECT 1, 'First'
UNION ALL
SELECT 2, 'Second'
UNION ALL
SELECT 3, 'Third'
UNION ALL
SELECT 4, 'Fourth'
UNION ALL
SELECT 5, 'Five'
GO
-- Create Clustered Index
ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
-- Create Nonclustered Index
CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC)
GO
-- Check that all the indexes are enabled
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO
-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO
-- Disable Indexes
-- Disable Nonclustered Index
ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
GO
-- Disable Clustered Index
ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
GO
-- Check that all the indexes are disabled
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO
-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
/*
-- The above operation should throw the following error
Msg 1974, Level 16, State 1, Line 1
Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled.
*/
-- Now let us rebuild clustered index only
ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
GO
-- Check the status of all the indexes
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO
-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO
-- Clean up
DROP TABLE [TableName]
GO

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
I recently received the following email from a blog reader:

"We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10 secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them, taking a long time (5-10 mins) to execute, when ideally the execution duration ranges between 5-10 secs. The execution plan of the same displayed a Clustered Index Scan happening instead of a Clustered Index Seek. All required indexes are found to be present and index fragmentation is also minimal, as we rebuild indexes regularly along with updating statistics. With no other alternate options occurring to us, we restarted SQL Server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now."

Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts:

Blocking
Bad plan in cache
Outdated statistics
Hardware bottleneck

To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to roll back, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT.

The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache.

To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache.  It is better to recompile stored procedures than to drop the procedure cache: dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again.

To determine if there is a hardware bottleneck occurring, such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support are typically under 30% CPU utilization, but your baseline could be higher and still be within a normal range.

If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the items mentioned above.  Proceed with caution when restarting the SQL Server service, as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on a database, it is unavailable to your applications.

If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should analyze the items mentioned above.

If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
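For quick reference, here is a minimal T-SQL sketch of the checks discussed above. The procedure name, table name, and spid are placeholders rather than details from the reader's system, so adjust them before running anything.

-- 1. Check for blocking: the blocked column holds the spid of the blocking process
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0;

-- If you decide to kill the head blocker, remember its transaction will roll back
-- KILL 53;

-- 2. Remove a suspected bad plan for one procedure instead of flushing the whole cache
EXEC sp_recompile N'dbo.YourProcedureName';
-- Heavier alternative that drops every plan in the procedure cache
-- DBCC FREEPROCCACHE;

-- 3. Refresh statistics on a table that just received a large bulk import
UPDATE STATISTICS dbo.YourImportedTable WITH FULLSCAN;

-- 4. Rough check for slow I/O (average stall per read/write in milliseconds)
SELECT database_id, file_id,
       io_stall_read_ms / NULLIF(num_of_reads, 0) AS avg_read_stall_ms,
       io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);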

    Read the article

  • Developer Training – Employee Morals and Ethics – Part 2

    - by pinaldave
Developer Training - Importance and Significance - Part 1
Developer Training – Employee Morals and Ethics – Part 2
Developer Training – Difficult Questions and Alternative Perspective - Part 3
Developer Training – Various Options for Developer Training – Part 4
Developer Training – A Conclusive Summary - Part 5

If you have been reading this series of posts about Developer Training, you can probably determine where my mind lies in the matter – firmly "pro."  There are many reasons to think that training is an excellent idea for the company.  In the end, it may seem like the company gets all the benefits and the employee has just wasted a few hours in a dark, stuffy room.  However, don't let yourself be fooled; this is not the case!

Training, Company and YOU!

Do not forget that, as an employee, you are your company's best asset.  Training is meant to benefit the company, of course, but in the end YOU, the employee, are the one who walks away with a lot of useful knowledge in your head.  This post will discuss what to do with that knowledge, how to acquire it, and who should pay for it.

Eternal Question – Who Pays for Training?

When the subject of training comes up, money is often the sticky issue.  Some companies will argue that because the employee is the one who benefits the most, he or she should pay for it.  Of course, whenever money is discussed, emotions tend to follow along, and being told you have to pay for mandatory training often results in very unhappy employees – the opposite of what the training was supposed to accomplish.  Therefore, many companies will pay for the training.  However, if your company is reluctant to pay for necessary training, or is hesitant to pay for a specific course that is extremely expensive, there is always the art of compromise.  The employee and the company can split the cost of the training – after all, both the company and the employee will be benefiting.

This kind of "hybrid" pay scheme can be split any way that is mutually beneficial.  There is the obvious 50/50 split, but for extremely expensive classes or conferences, this still might be prohibitively expensive for the employee.  If you are facing this situation, here are some example solutions you could suggest to your employer: travel reimbursement, paid leave, or payment for only the tuition.  There are even more complex solutions – the company could pay back the employee after the training and project have been completed.

Training is not Vacation

Once the classes have been settled on, and the question of payment has been answered, it is time to attend your class or travel to your conference!  The first rule is one that your mothers probably instilled in you as well – have a good attitude.  You might be looking forward to your time away from work, going to an interesting class, hopefully with some friends and coworkers, but do not mistake this time for a vacation.  It can be tempting to only have fun, but don't forget to learn as well.  I call this "attending sincerely."  Pay attention, have an open mind and a good attitude, and don't forget to take notes!  You might be surprised how many people will want to see what you learned when you go back.

Report Back the Learning

When you get back to work, those notes will come in handy.  Your supervisor and coworkers might want you to give a short presentation about what you learned.  Attending these classes can make you almost a celebrity. Don't be too nervous about these presentations, and don't feel like they are meant to be a test of your dedication.  Many people will be genuinely curious – and maybe a little jealous that you got to go learn something new.  Be generous with your notes and be willing to pass your learning on to others through mini-training sessions of your own.

Practice New Learning

On top of helping to train others, don't forget to put your new knowledge to use!  Your notes will come in handy for this, and you can even include your plans for the future in your presentation when you return.  This is a good way to demonstrate to your bosses that the money they paid (hopefully they paid!) is going to be put to good use.

Feedback to Manager

When you return, be sure to set aside a few minutes to talk about your training with your manager.  Be perfectly honest – your manager wants to know the good and the bad.  If you had a truly miserable time, do not lie and say it was the best experience – you and others may be forced to attend the same training over and over again!  Of course, you do not want to sound like a complainer, so make sure that your summary includes the good news as well.  Your manager may be able to help you understand more of what they wanted you to learn, too.

Win-Win Situation

In the end, remember that training is supposed to be a benefit to the employer as well as the employee.  Make sure that you share your information and that you give feedback about how you felt the sessions went, as well as how you think this training can be implemented at the company immediately.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Integrate BING API for Search inside ASP.Net web application

    - by sreejukg
As you might already know, Bing is the Microsoft search engine and is getting more popular day by day. Bing offers APIs that can be integrated into your website to increase your website's functionality. At this moment, there are two important APIs available:

Bing Search API
Bing Maps

The Search API enables you to build applications that utilize Bing's technology. The API allows you to search multiple source types such as web, images, and video, and supports various output protocols such as JSON, XML, and SOAP. You will also be able to customize the search results as you wish for your public-facing website. The Bing Maps API allows you to build robust applications that use Bing Maps. In this article I am going to describe how you can integrate Bing search into your website.

In order to start using Bing, first you need to sign in to http://www.bing.com/toolbox/bingdeveloper/ using your Windows Live credentials. Click on the Sign in button and you will be asked to enter your Windows Live credentials. Once signed in, you will be redirected to the Developer page. Here you can create applications and get an AppID for each application. Since I am a first-time user, I don't have any applications added. Click on the Add button to add a new application. You will be asked to enter certain details about your application. The fields are straightforward; the only thing you need to note is the website field, where you enter the website address from which you are going to use this application (this field is optional). Of course, you need to agree to the terms and conditions and then click Save. Once you click on Save, the application will be created and the application ID will be available for your use. Now we have the AppID.

Basically, Bing supports three protocols: JSON, XML and SOAP. JSON is useful if you want to call the search requests directly from the browser and use JavaScript to parse the results, thus JSON is the favorite choice for AJAX applications. XML is the alternative for applications that do not support SOAP, e.g. Flash, Silverlight etc. SOAP is ideal for strongly typed languages and gives a request/response object model. In this article I am going to demonstrate how to search the Bing API using the SOAP protocol from an ASP.Net application.

For the purpose of this demonstration, I am going to create an ASP.Net project and implement the search functionality in an aspx page. Open Visual Studio, navigate to File -> New Project, and select ASP.Net Empty Web Application; I named the project "BingSearchSample". Add a Search.aspx page to the project; once added, the solution explorer will look similar to the following.

Now you need to add a web reference to the SOAP service available from Bing. To do this, from the solution explorer, right click your project and select Add Service Reference. The new service reference dialog will appear. In the bottom left of the dialog you can find the Advanced button; click on it. Now the service reference settings dialog will appear. In the bottom left, you can find the Add Web Reference button; click on it. The add web reference dialog will appear now. Enter the URL as http://api.bing.net/search.wsdl?AppID=<YourAppIDHere>&version=2.2 (replace <yourAppIDHere> with the AppID you have generated previously) and click on the button next to it. This will find the web service methods available. You can change the namespace suggested by Bing, but for the purpose of this demonstration I have accepted all the default settings. Click on the Add Reference button once you are done. Now the web reference to the Search service will be added to your project. You can find this under the solution explorer of your project.

Now in the Search.aspx page that you previously created, place a textbox, a button, and a grid view. For the purpose of this demonstration, I have given the identifiers (ID) as txtSearch, btnSearch, and gvSearch respectively. The idea is to search the text entered in the textbox using the Bing service and show the results in the grid view. In the design view, the Search.aspx looks as follows. In the Search.aspx.cs page, add a using statement that points to net.bing.api. I have added the code for the button click event handler (a rough sketch of such a handler is included at the end of this article). The code is very straightforward. It just calls the service with your AppID, a query to search, and a source for searching. Let us run this page and see the output when I enter Microsoft in my textbox.

If you want to search a specific site, you can include the site name in the query parameter. For example, the following query will search for the word Microsoft on the www.microsoft.com website.

searchRequest.Query = "site:www.microsoft.com Microsoft";

The output of this query is as follows. Integrating the Bing Search API into your website is easy and there is no limit on the customization of the interface you can do. There is no Bing branding required, so I believe this is a great option for web developers when they plan for site search.
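Since the click handler itself appears only as a screenshot in the original article, here is a rough C# sketch of what such a handler might look like. It assumes the web reference generated a proxy named BingService with SearchRequest, SearchResponse, and SourceType types under the default net.bing.api namespace; the generated names in your project may differ, and YOUR-APP-ID-HERE is a placeholder for your own AppID.

using System;
using BingSearchSample.net.bing.api; // generated web reference namespace; adjust if yours differs

public partial class Search : System.Web.UI.Page
{
    // Placeholder for the AppID created on the Bing developer page
    private const string AppId = "YOUR-APP-ID-HERE";

    protected void btnSearch_Click(object sender, EventArgs e)
    {
        // Proxy class generated from search.wsdl
        BingService service = new BingService();

        // Build the request: the AppID, the query text, and the source to search
        SearchRequest searchRequest = new SearchRequest();
        searchRequest.AppId = AppId;
        searchRequest.Query = txtSearch.Text;
        searchRequest.Sources = new SourceType[] { SourceType.Web };

        // Call the SOAP service and bind the web results to the grid view
        SearchResponse response = service.Search(searchRequest);
        gvSearch.DataSource = response.Web.Results;
        gvSearch.DataBind();
    }
}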

    Read the article

  • Review of Samsung Focus Windows Phone 7

    - by mbcrump
I recently acquired a Samsung Focus Windows Phone 7 device from AT&T and wanted to share what I thought of it as an end user. Before I get started, here are several of my write-ups for Windows Phone 7. You may want to check out the second article, titled "Hands-on WP7 Review of Prototype Hardware."

From start to finish with the final version of Visual Studio Tools for Windows Phone 7
Hands-on: Windows Phone 7 Review on Prototype Hardware
Deploying your Windows Phone 7 Application to the actual hardware
Profile your Windows Phone 7 Application for Free
Submitting a Windows Phone 7 Application to the Market

Samsung Focus i917

Phone Size: Perfect! I have been carrying around a Dell Streak (Android) and the Focus is about half its size. It is really nice to have a phone that fits in your pocket without a lot of extra bulk. I bought a case for the Focus and it is still a perfect size. The phone just feels right.

Screen: It has a beautiful Super AMOLED 480x800 screen. I only wish it supported a higher resolution. The colors are beautiful, especially in an Xbox Live game.

3G: I use AT&T and I've had spotty reception. This really can't be blamed on the phone as much as the actual carrier.

Battery: I've had excellent battery life compared to my iPhone and Android devices. I usually use my phone throughout the day on and off and still have a charge at the end of the day.

Camera/Video: I'm still looking for the option to send video to YouTube or an image to Twitter. The images look good, but the phone needs a forward-facing camera. I like the iPhone/Android (Dell Streak) camera better.

Built-in Speaker: Sounds great. It's not a wimpy speaker that you cannot hear.

CPU: Very smooth transitioning from one screen to another. The prototype Windows Phone 7 that I had was nowhere near as smooth. (It was also running a slower processor, though.)

OS: I actually like the OS, but a few things could be better.

CONS:
Copy and paste (supposed to come in the next update).
We need more apps (Pandora missing was a big one for me, and Slacker's advertisement sucks!). As time passes and more developers get on board, this will be fixed.
The browser needs some major work. I have tried to make cross-platform (WP7, Android, iPhone and iPad) web apps and the browser that ships with WP7 just can't handle it.
Apps need to be organized better. Instead of throwing them all on one screen, it would help to allow the user to create categories.

PROS:
Hands down the best gaming experience on a phone. I have all three major phones (iPhone, Android and WP7). Nothing compares to the gaming experience on the WP7.
The phone just works. I've had a LOT of glitches with my Android device. I've had maybe 2 with my WP7 device.
Exchange and Office support are great.
Nice integration with Twitter/Facebook and social media.
Easy to navigate and find the information you need on one screen.

Let's look at a few pictures and we will wrap up with my final thoughts on the phone. (Pictures: the WP7 home screen; the back of the phone, which is just as stylish and, though hard to see due to the shadow, very thin; and what's included in the box: manuals, earbuds, data cable plus power adapter, and the phone.)

So, what are my final thoughts on the phone/OS? I love the Samsung Focus and would recommend it to anyone looking for a WP7 device. Like any first-generation product, you need to give it a little while to mature. Right now the phone is missing several features that we are all used to using. That doesn't mean a year from now it will be in the same situation. (I sure hope it won't.) If you are looking to get into mobile development, I believe WP7 is the easiest platform to develop from. This is especially true if you have a background in Silverlight or WPF.

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >