Search Results

Search found 32453 results on 1299 pages for 'osr wls oracle service re'.


  • using IntentExtras with AlarmManager

    - by Ashwin
    I want to know if this code will work (I cannot try it out right now, and I have a few doubts that need clearing up).

        Intent intent = new Intent(context, AlarmReceiver.class);
        intent.putExtra("user", global.getUsername());
        intent.putExtra("password", global.getPassword());
        PendingIntent sender = PendingIntent.getBroadcast(context, 192837, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        // Get the AlarmManager service
        Log.v("inside log_run", "new service started");
        AlarmManager am = (AlarmManager) getSystemService(ALARM_SERVICE);
        am.setRepeating(AlarmManager.RTC_WAKEUP, IMMEDIATELY, 60000, sender);
        finish();

    As you can see, this code starts an AlarmManager with setRepeating(). If you look at the intent (actually the pending intent) passed on to the BroadcastReceiver, there are two extras passed along with it. These are global variables that live as long as the application is running. But this AlarmManager is meant to run in the background (that is, the application will be alive only for the first few calls of the AlarmManager to the broadcast receiver). My question: will AlarmManager make a copy of the global variables (the username and password) and maintain this copy to be passed along with the intent? These values will be used in the broadcast receiver.
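
    For what it's worth, the extras are copied into the Intent's Bundle at the moment putExtra() is called, and the PendingIntent carries that snapshot in the system server, so the receiver should keep seeing the values even after the application process is gone. A minimal sketch of the corrected scheduling code (AlarmReceiver and the credential values are the asker's own; the wrapper class is hypothetical):

        import android.app.AlarmManager;
        import android.app.PendingIntent;
        import android.content.Context;
        import android.content.Intent;

        public class AlarmScheduler {

            // username/password stand in for global.getUsername()/getPassword().
            public static void schedule(Context context, String username, String password) {
                Intent intent = new Intent(context, AlarmReceiver.class);
                // putExtra() copies the String values into the Intent's extras
                // Bundle right here; the receiver gets this snapshot, not a live
                // reference to the application-scoped object.
                intent.putExtra("user", username);
                intent.putExtra("password", password);

                // FLAG_UPDATE_CURRENT re-captures the extras if a PendingIntent
                // with the same request code (192837) already exists.
                PendingIntent sender = PendingIntent.getBroadcast(
                        context, 192837, intent, PendingIntent.FLAG_UPDATE_CURRENT);

                AlarmManager am =
                        (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
                am.setRepeating(AlarmManager.RTC_WAKEUP,
                        System.currentTimeMillis(), // first fire immediately
                        60000,                      // then every 60 seconds
                        sender);
            }
        }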

    Read the article

  • Exception handling in SQL?

    - by Vineet
    Is there any way to handle exceptions in SQL (Oracle 9i)? I am trying to divide the values of a column that contains both numbers and literals, so I need to fetch only the numbers from it: if a value is a number it can be divided, but if it contains literals the division fails and generates an error. How do I handle those errors? Please suggest!
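
    For what it's worth, one common way around this on Oracle 9i (which lacks REGEXP_LIKE, introduced in 10g) is not to catch the error but to avoid raising it: guard the TO_NUMBER conversion with a digits-only test built from TRANSLATE, so non-numeric rows are never divided. A hedged JDBC sketch; the table MIXED and column VAL are made-up names:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class NumericOnlyDivision {

            public static void printHalves(Connection conn) throws SQLException {
                // TRANSLATE maps '0' to a space and deletes '1'-'9'; if TRIM of
                // the result is NULL, the value held nothing but digits, so
                // TO_NUMBER is safe. CASE guarantees the conversion is never
                // evaluated for non-numeric rows, avoiding ORA-01722.
                String sql =
                    "SELECT CASE WHEN TRIM(TRANSLATE(val, '0123456789', ' ')) IS NULL "
                  + "            THEN TO_NUMBER(val) / 2 END AS half "
                  + "FROM mixed";
                Statement st = conn.createStatement();
                try {
                    ResultSet rs = st.executeQuery(sql);
                    while (rs.next()) {
                        System.out.println(rs.getDouble("half"));
                    }
                } finally {
                    st.close();
                }
            }
        }

    The check above accepts plain digits only; values with decimal points or signs would need a slightly broader test.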

    Read the article

  • SQL running slow when I execute it through Java

    - by user270885
    I am trying to execute a query against an Oracle DB. When I run the query through SQLTools it executes in 2 seconds, but when I run the same query through Java it takes more than a minute. I am using the hint /*+ ordered use_nl (a, b) */ and ojdbc6.jar. Is it because of any JARs? Please help me find what's causing this.
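
    For what it's worth, the hint itself travels inside the SQL text unchanged, so the usual suspects are elsewhere: Oracle's JDBC driver prefetches only 10 rows per round trip by default (GUI tools often fetch far more, which makes them look faster), and a query run with bind variables can get a different plan than the same query run with literals in a tool. A hedged sketch of the fetch-size check; the table and column names are placeholders:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class HintedQuery {

            public static void run(Connection conn) throws SQLException {
                // The optimizer hint is passed through exactly as typed.
                String sql = "SELECT /*+ ordered use_nl(a, b) */ a.id, b.name "
                           + "FROM table_a a, table_b b "
                           + "WHERE a.id = b.a_id";
                Statement st = conn.createStatement();
                try {
                    // Raise the row prefetch from Oracle's default of 10 to cut
                    // network round trips when the result set is large.
                    st.setFetchSize(500);
                    ResultSet rs = st.executeQuery(sql);
                    while (rs.next()) {
                        // process the row...
                    }
                } finally {
                    st.close();
                }
            }
        }

    If fetch size is not the issue, comparing the execution plans of the bind-variable and literal versions of the query would be the next thing to check.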

    Read the article

  • Renaming the argument name in JAX-WS

    - by user182944
    I created a web service using JAX-WS in RSA 7.5 and WebSphere 7 using the bottom-up approach. When I open the WSDL in SOAP UI, the arguments section appears like this:

        <!--Optional-->
        <arg0>
            <empID>?</empID>
        </arg0>
        <!--Optional-->
        <arg1>
            <empName>?</empName>
        </arg1>
        <!--Optional-->
        <arg2>
            <empAddress>?</empAddress>
        </arg2>
        <!--Optional-->
        <arg3>
            <empCountry>?</empCountry>
        </arg3>

    The service method takes the above 4 elements as parameters and returns the employee details. 1) I want to rename arg0, arg1, and so on with some valid names. 2) I want to remove the <!--Optional--> comment above the arg tags. (For removing <!--Optional--> from element names, I know about @XmlElement(required=true), but I am not sure where exactly to use this annotation in this case :( ) Please help. Regards,
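
    For what it's worth, with JAX-WS the generated argN names are normally overridden with @WebParam(name = ...) on each method parameter, and placing @XmlElement(required = true) on the same parameter is the usual way to make the element mandatory, which removes the <!--Optional--> marker SOAP UI shows. A sketch of roughly how that could look; the class and method names are hypothetical:

        import javax.jws.WebMethod;
        import javax.jws.WebParam;
        import javax.jws.WebService;
        import javax.xml.bind.annotation.XmlElement;

        @WebService
        public class EmployeeService {

            // @WebParam renames arg0..arg3 in the WSDL;
            // @XmlElement(required = true) turns minOccurs="0" into a
            // mandatory element.
            @WebMethod
            public String getEmployeeDetails(
                    @WebParam(name = "empID")      @XmlElement(required = true) String empID,
                    @WebParam(name = "empName")    @XmlElement(required = true) String empName,
                    @WebParam(name = "empAddress") @XmlElement(required = true) String empAddress,
                    @WebParam(name = "empCountry") @XmlElement(required = true) String empCountry) {
                // lookup logic omitted in this sketch
                return empID + ": details...";
            }
        }

    After rebuilding and redeploying, the regenerated WSDL should show <empID> and the other names in place of <arg0>..<arg3>, without the Optional comments.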

    Read the article

  • ASP code for Excel upload

    - by vicky
    I want to upload an Excel file from the client to the server using ASP. It should then check the field names of the file against the Oracle database table and save its contents in the database. Can somebody help me with this?

    Read the article

  • ASP.NET - selecting a checkbox from the GridView

    - by baburam
    I am trying to get a checkbox value using C#. In a GridView whose data source is Oracle, I have added the checkbox, but the value is not coming through. Please help me check the value of the checkbox. This is for selecting candidates: if the checkbox is selected, that particular candidate's name is selected, and the candidate names are appended into a string.

    Read the article

  • Why did Sun develop the Java platform? [closed]

    - by nic28
    Why did Sun (now owned by Oracle, I know) develop the Java Platform? How does it make business sense? It seems to me like it would be a very expensive project (also, any ideas on how much they spent/are spending to develop/maintain the platform?). Are they making money by selling support or something?

    Read the article

  • Cannot log in to Ubuntu 14.04, returns to the login screen

    - by Safder
    I updated to 14.04 from 12.04.3. I was using Cinnamon at that time. When I upgraded to 14.04, I got a serious problem: I am unable to log in, as whenever I hit Enter after entering my password, it takes me straight back to the login screen. I am unable to log in even as Guest. So, here is my .xsession-errors file; possibly someone could help. Thanks in advance.

    Script for ibus started at run_im.
    Script for auto started at run_im.
    Script for default started at run_im.
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd respawning too fast, stopped
    init: gnome-settings-daemon main process (14717) killed by TRAP signal
    init: gnome-settings-daemon main process ended, respawning
    init: update-notifier-crash (/var/crash/_usr_bin_gnome-session.1000.crash) main process (14607) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-x11.1000.crash) main process (14623) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_x86_64-linux-gnu_bamf_bamfdaemon.1000.crash) main process (14663) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-ui-gtk3.1000.crash) main process (14620) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_gnome-session_gnome-session-check-accelerated.1000.crash) main process (14612) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_gnome-settings-daemon_gnome-settings-daemon.1000.crash) main process (14616) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_unity_unity-panel-service.1000.crash) main process (14629) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_unity-settings-daemon_unity-settings-daemon.1000.crash) main process (14625) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_update-notifier_system-crash-notification.1000.crash) main process (14662) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_gnome-session_gnome-session-check-accelerated.127.crash) main process (14613) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_share_apport_apport-gtk.1000.crash) main process (14665) terminated with status 133
    init: update-notifier-crash (/var/crash/linux-image-3.13.0-24-generic.0.crash) main process (14606) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_bin_Xorg.0.crash) main process (14609) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-x11.127.crash) main process (14624) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-ui-gtk3.127.crash) main process (14622) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_gnome-settings-daemon_gnome-settings-daemon.127.crash) main process (14618) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_unity-settings-daemon_unity-settings-daemon.127.crash) main process (14628) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_x86_64-linux-gnu_bamf_bamfdaemon.127.crash) main process (14664) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_bin_gnome-session.127.crash) main process (14608) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_lib_unity_unity-panel-service.127.crash) main process (14634) terminated with status 133
    init: update-notifier-crash (/var/crash/_usr_share_apport_apport-gtk.127.crash) main process (14667) terminated with status 133
    init: gnome-settings-daemon main process (14843) killed by TRAP signal
    init: gnome-settings-daemon main process ended, respawning
    init: gnome-settings-daemon main process (14850) killed by TRAP signal
    init: gnome-settings-daemon main process ended, respawning
    init: gnome-session (GNOME) main process (14727) killed by TRAP signal
    init: gnome-settings-daemon main process (14859) killed by TRAP signal
    init: logrotate main process (14570) killed by TERM signal
    init: upstart-dbus-session-bridge main process (14683) terminated with status 1
    init: Disconnected from notified D-Bus bus

    Read the article

  • Tulsa SharePoint Interest Group - How SharePoint 2010 Business Connectivity Services could change your life

    - by dmccollough
    Bio: Corey Roth is a consultant at Stonebridge specializing in SharePoint solutions in the Oil & Gas Industry. He has ten plus years of experience delivering solutions in the energy, travel, advertising and consumer electronics verticals. Corey has always focused on rapid adoption of new Microsoft technologies including Visual Studio 2010, SharePoint 2010, .NET Framework 4.0, LINQ, and Silverlight. He also contributed greatly to the beta phases of Visual Studio 2005. For his contributions, he was awarded the Microsoft Award for Customer Excellence (ACE). Corey is a graduate of Oklahoma State University. Corey is a member of the .NET Mafia (www.dotnetmafia.com) where he blogs about the latest technology and SharePoint.

    Abstract: How SharePoint 2010 Business Connectivity Services could change your life - The New BDC. How many hours have you wasted building simple ASP.NET applications to do nothing more than simple CRUD operations against a database? Many tools have made this easier, but now it's so easy, you'll be up and running in minutes. This session will show you how easy it is to get started integrating external data from your line-of-business systems in SharePoint 2010. You will learn how to register an external content type using SharePoint Designer based upon a database table or web service and then build an external list. With external lists, you will see how you can perform CRUD operations on your line-of-business data directly from SharePoint without ever having to do manual configuration in XML files. Finally, we will walk through how to create custom edit forms for your list using InfoPath 2010.

    Agenda:
    6:00 - 6:30 Pizza and Mingle - Sponsored by TekSystems
    6:30 - 6:45 Announcements
    6:45 - 7:45 Presentation!
    7:45 - 8:00 Drawings and Door Prizes

    Location:
    TCC (Tulsa Community College) Northeast Campus
    3727 East Apache, Tulsa, OK 74115
    918-594-8000
    Campus Map | Live | Yahoo | Google | MapQuest

    Door Prizes: We will be giving away one of each of these:
    XBox 360 - Halo 3 ODST
    Telerik Premium Collection ($1300.00 value)
    ReSharper ($199.00 value)
    SQLSets ($149.00 value)
    64 bit Windows 7
    Introducing Windows 7 for Developers
    Developing Service-Oriented AJAX Applications on the Microsoft Platform

    Sponsors: Thanks to our sponsors:
    TekSystems - Thanks for purchasing the pizza for our meetings.
    ISOCentric - Thanks for providing us hosting for the group's web site.
    Tulsa Community College - Thanks for providing us a place to have our meetings.
    NEVRON - Thanks for providing us prizes to give away.
    INETA.org - For allowing us to be a Charter Member and providing awesome speakers!
    PERPETUUM Software - Thanks for providing us prizes to give away.
    Telerik - Thanks for providing us prizes to give away.
    GrapeCity - Thanks for providing us prizes to give away.
    SQLSets - Thanks for providing us prizes to give away.
    K2 - Thanks for providing us prizes to give away.
    Microsoft - For providing us with a lot of support and product giveaways!
    O'Reilly books - For providing us with books and discounts.
    Wrox books - For providing us with books and discounts.

    Have any special requests? Let us know at this link: http://tinyurl.com/lg5o38. RSVP for this month's meeting by responding to this thread: http://tinyurl.com/yafkzel (must be logged in to the site). Be SURE to RSVP no later than noon on April 12th and you will get an extra entry for the prize drawings! So, do it now, before you forget and miss out! Show up for the first time or bring a new buddy and you both get TWO extra entries!

    Read the article

  • ASP.NET AJAX, jQuery and AJAX Control Toolkit – the roadmap

    - by Harish Ranganathan
    The opinions mentioned herein are solely mine and do not reflect those of my employer.

    I've wanted to post this for a long time but couldn't. I have been an ASP.NET developer for quite some time and have worked with versions 1.1, 2.0, 3.5 as well as the latest 4.0. With ASP.NET 2.0 and Visual Studio 2005 came the era of AJAX and rich-UI style web applications. So, ASP.NET AJAX (codenamed "Atlas") was released almost a year later. This was called ASP.NET 2.0 AJAX Extensions, and the release was supported further with Visual Studio 2005 Service Pack 1. The initial release of ASP.NET AJAX had 3 components:

    ASP.NET AJAX Library – Client library that is used internally by the server controls, as well as scripts that can be used to write hand-coded AJAX-style pages.
    ASP.NET AJAX Extensions – Server controls, i.e. the ScriptManager, ScriptManagerProxy, UpdatePanel, UpdateProgress and Timer server controls. These work pretty much like other server controls in terms of development and render client-side behavior automatically.
    AJAX Control Toolkit – Set of server controls that each extend a behavior or a capability, e.g. the AutoCompleteExtender.

    The AJAX Control Toolkit was a separate download from CodePlex, while the first two were installed when you installed ASP.NET AJAX Extensions. With Visual Studio 2008, ASP.NET AJAX made its way into the runtime, so one doesn't need to separately install the AJAX Extensions. However, the AJAX Control Toolkit still remained a community project that can be downloaded from CodePlex. By then, the toolkit had close to 30 controls. So, the approach was clear: client-side programming using the ASP.NET AJAX Library, and a server-side model using built-in controls (UpdatePanel) and/or the AJAX Control Toolkit. However, with Visual Studio 2008 Service Pack 1, we also added support for the ever more popular jQuery library. That is, you can use jQuery along with ASP.NET and also get IntelliSense for jQuery in Visual Studio 2008. Some of you who have played with Visual Studio 2010 Beta and .NET Framework 4 Beta would also have explored the new AJAX Library, which had a lot of templates, live bindings and so on. But overall, the roadmap ahead makes things much simpler:

    For client-side programming using JavaScript to implement AJAX in ASP.NET, the recommendation is to use jQuery, which will be shipped along with Visual Studio and provides IntelliSense as well.
    For server-side programming, you can use the server controls like UpdatePanel and also the AJAX Control Toolkit, which has close to 40 controls now.

    The AJAX Control Toolkit still remains a separate download at CodePlex. You can download the versions for the different versions of ASP.NET at http://ajaxcontroltoolkit.codeplex.com/ The Microsoft AJAX Library will still be available through the CDN (Content Delivery Network) channels. You can view the CDN resources at http://www.asp.net/ajaxlibrary/CDN.ashx Similarly, even jQuery and the toolkit will be available as CDN resources in case you choose not to download them and have them as part of your application. I think this makes AJAX development pretty simple. Earlier, having both the Microsoft AJAX Library and jQuery for client-side scripting was kind of confusing as to which one to use. With this roadmap, it is simple and clear. You can read more on this at http://ajax.asp.net I hope this post provided some clarity on the AJAX roadmap, as far as I could decipher it from the various product teams. Cheers!!!

    Read the article

  • CRM 2011 - Workflows Vs JavaScripts

    - by Kanini
    In the Contact entity, I have the following attributes:

    Preferred email - A read-only field of type Email
    Personal email 1 - An email field
    Personal email 2 - An email field
    Work email 1 - An email field
    Work email 2 - An email field
    School email - An email field
    Other email - An email field
    Preferred email option - An option set with the following values (Personal email 1, Personal email 2, Work email 1, Work email 2, School email and Other email)

    None of the above-mentioned fields are required.

    Requirement: When a user picks a value from Preferred email option, we copy the email address available in that field and apply the same value in the Preferred email field.

    Implementation: The Solution Architect suggested that we implement the above requirement as a workflow. The reason he gave was that, most of the time, these values are populated by an external website and the data is then fed into the CRM 2011 system. So, when they update Preferred email option via a web service call to CRM, the workflow will run and update the Preferred email field.

    My argument / solution: What will happen if I do not pick a value from the Preferred email option set? Do I set it to any of the email addresses that has a value in it? If so, what if more than one of the email address fields is populated, i.e., what if Personal email 1 and Work email 1 are populated but no value is picked in the option set? What if a value existed in the Preferred email option set and I then change it to NULL? Should the field Preferred email (where the text value of the email address is stored) be set to read-only? If not, what if I have picked Personal email 1 in the option set and then edit the Preferred email text field with a completely new email address? If yes, then we are enforcing that the preferred email should be one among Personal email 1, Personal email 2, Work email 1, Work email 2, School email or Other email [my preference would be this]. What if I had a value of [email protected] in the Personal email 1 field while Personal email 2 is empty, and I choose Personal email 1 in the drop-down for Preferred email (this will set the Preferred email field to [email protected]), and later I change the value to Personal email 2 in the Preferred email option? It overwrites a valid email address with nothing. I agree that it would be highly unlikely that a user will pick Preferred email as Personal email 2 and not have a value in it, but nevertheless it is a possible scenario, isn't it? What if a user typed a value in Personal email 1 but by mistake picked Personal email 2 in the option set, and the Personal email 2 field had no value in it?

    Solution:
    1. The field Preferred email option should be a required field.
    2. A JS function should run whenever Preferred email option is changed. That function should set the relevant email field as required (based on the option chosen), and another JS function should be called (see step 3).
    3. A JS function should update the value of Preferred email with the value in the email field picked in the option set. This function should also run every time someone updates the actual email field chosen in the option set.
    4. The people managing the external website should update the Preferred email field - surely, if they can update Preferred email option via a web service call, it is easy enough to update the Preferred email too, right?

    Question: Which is the better method - should this be written as JavaScript or as a workflow? Also, whose responsibility is it to update the Preferred email field when the data flows from an external website?

    I am new to CRM 2011 but have around 6 years of experience as a CRM consultant (with other products). I do not come from a development background, as I started off as an Application Support Engineer, but I have picked up development in the last couple of years.

    Read the article

  • MSMQ messages using HTTP just won't get delivered

    - by John Breakwell
    I'm starting off the blog with a discussion of an unusual problem that has hit a couple of my customers this month. It's not a problem you'd expect to bump into and the solution is potentially painful.

    Scenario
    You want to make use of the HTTP protocol to send MSMQ messages from one machine to another. You have installed HTTP support for MSMQ and have addressed your messages correctly but they will not leave the outgoing queue. There is no configuration for HTTP support - setup has already done all that for you (although you may want to check the most recent "Installation of the MSMQ HTTP Support Subcomponent" section of MSMQINST.LOG to see if anything DID go wrong) - so you can't tweak anything. Restarting services and servers makes no difference - the messages just will not get delivered. The problem is documented and resolved by Knowledgebase article 916699 "The message may not be delivered when you use the HTTP protocol to send a message to a server that is running Message Queuing 3.0". It is unlikely that you would be able to resolve the problem without the assistance of PSS because there are no messages that can be seen to assist you and only access to the source code exposes the root cause. As this communication is over HTTP, the IIS logs would be a good place to start. POST entries are logged which show that connectivity is working and message delivery is being attempted:

    #Software: Microsoft Internet Information Services 6.0
    #Version: 1.0
    #Date: 2006-09-12 12:11:29
    #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
    2006-09-12 12:12:12 W3SVC1 10.1.17.219 POST /msmq/private$/test - 80 - 10.2.200.3 - 200 0 0

    If you capture the traffic with Network Monitor you can see the POST being sent to the server but you also see a response being returned to the client:

    HTTP: Response to Client; HTTP/1.1; Status Code = 500 - Internal Server Error

    "Internal Server Error" means we can probably stop looking at IIS and instead focus on the Message Queuing ISAPI extension (Mqise.dll). MSMQ 3.0 (Windows XP and Windows Server 2003) comes with error logging enabled by default but the log files are in binary format - MSMQ 2.0 generated logging in plain text. The symbolic information needed for formatting the files is not currently publicly available so log files have to be sent in to Microsoft PSS. Although this does mean raising a support case, formatting the log files to text and returning them to the customer shouldn't take long. Obviously the engineer analyses them for you - I just want to point out that you can see the logging output in text format if you want it. The important entries in the log for this problem are:

    [7]b48.928 09/12/2006-13:20:44.552 [mqise GetNetBiosNameFromIPAddr] ERROR:Failed to get the NetBios name from the DNS name, error = 0xea
    [7]b48.928 09/12/2006-13:20:44.552 [mqise RPCToServer] ERROR:RPC call R_ProcessHTTPRequest failed, error code = 1702(RPC_S_INVALID_BINDING)

    which allow a Microsoft escalation engineer to check the MQISE source code to see what is going wrong. This problem according to the article occurs when the extension tries to bind to the local MSMQ service after the extension receives a POST request that contains an MSMQ message.
MSMQ resolves the server name by using the DNS host name but the extension cannot bind to the service because the buffer that MSMQ uses to resolve the server name is too small - server names that are exactly 15 characters long will not fit. RPC exception 0x6a6 (RPC_S_INVALID_BINDING) occurs in the W3wp.exe process but the exception is handled and so you do not receive an error message. The workaround is to rename the MSMQ server to something less than 15 characters. If the problem has only just been noticed in a production environment - an application may have been modified to get through a newly-implemented firewall, for example - then renaming is going to be an issue. Other applications may need to be reinstalled or modified if server names are hard-coded or stored in the registry. The renaming may also break a company naming convention where the name is built up from something like location+department+number. If you want to learn more about MSMQ logging then check out Chapter 15 of the MSMQ FAQ. In fact, even if you DON'T want to learn anything about MSMQ logging you should read the FAQ anyway as there is a huge amount of useful information on known issues and the like.

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Installation

    - by StuartBrierley
    As previously detailed, I have completed a single server installation of BizTalk Server 2009 Standard on my development laptop; a MacBook Pro Core2Duo running at 2.16GHz with 2GB of RAM. Following this I also posted on my use of the BizTalk Server Best Practices Analyser and how to configure the BizTalk SQL Server jobs. All of which means that I should have some confidence that I have a decent working BizTalk Server 2009 environment. Next I thought that it would be a good idea to try and get some idea of how this setup performs by carrying out some baseline tests that can then be replicated on the test and live servers. The aim of this would be to allow confident predictions to be made of how any solutions developed on a single "server" installation may be expected to perform when deployed to these multi-server BizTalk Server 2009 Standard installations. The BizTalk Benchmark Wizard would seem to be the perfect tool for the job. The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. This utility should be used after BizTalk Server has been installed and before any solutions are deployed to the environment. This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment: "The executed scenarios may or may not be relative to any realistic scenario, and is only intended for testing. The BizTalk Benchmark Wizard has been developed in relation to the BizTalk Server 2009 Scale Out Testing Study. More information about the study can be found here: http://msdn.microsoft.com/en-us/library/ee377068(BTS.10).aspx"

    After downloading and installing the wizard you will need to set up the hosts, instances and adapter handlers. This is done by running a script file using "cscript" as detailed below. To do this you will need to open a command prompt window and navigate to the script folder; assuming the default installation location this would be C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk. In this folder you should find an InstallHosts.vbs file which can be executed using the following parameters:

    NTGroupName - The name of the Windows NT group.
    UserName - The name of the user account running the service instances.
    Password - The password of the user account running the service instances.
    Receive Host - The name of the server where you want to run the receive host instance.
    Send Host - The name of the server where you want to run the send host instance.
    Processing Host - The name of the server where you want to run the process host instance.

    By default the script is set up for 64-bit hosts, so if you are running in a 32-bit environment make sure that you change the following line in the script before continuing:

    from: objHS.IsHost32BitOnly = False
    to:   objHS.IsHost32BitOnly = True

    If you have a single box installation, your script command might look like this:

    cscript InstallHosts.vbs "BizTalk Application Users" "\MyUser" "MyPassword" "BtsServer1" "BtsServer1" "BtsServer1"

    If you have a multi server installation, your script command might look like this:

    cscript InstallHosts.vbs "MyDomain\BizTalk Application Users" "MyDomain\MyUser" "MyPassword" "BtsServer1" "BtsServer2" "BtsServer2"

    Running this script will create:

    Three hosts (BBW_RxHost, BBW_TxHost and BBW_PxHost)
    Three host instances
    One send and one receive adapter handler for the WCF NetTcp adapter

    You will then need to import the BizTalk MSI via the BizTalk Administration Console. Open the BizTalk Administration Console, point to the "Applications" node and import the BizTalk Benchmark Wizard.msi found in the same folder as the script above. This will create a "BizTalk Benchmark Wizard" application along with all ports and orchestrations needed. To finish the installation you will need to run the BizTalk Benchmark Wizard.msi on all BizTalk servers to add the assemblies to the Global Assembly Cache (GAC). Next I will look at running the BizTalk Benchmark Wizard.

    Read the article

  • My ASP.NET news sources

    - by Jon Galloway
    I just posted about the ASP.NET Daily Community Spotlight. I was going to list a bunch of my news sources at the end, but figured this deserves a separate post. I've been following a lot of development blogs for a long time - for a while I subscribed to over 1500 feeds and read them all. That doesn't scale very well, though, and it's really time consuming. Since the community spotlight requires an interesting ASP.NET post every day of the year, I've come up with a few sources of ASP.NET news.

    Top Link Blogs
    Chris Alcock's The Morning Brew is a must-read blog which highlights each day's best blog posts across the .NET community. He covers the entire Microsoft development space, but generally any of the top ASP.NET posts I see either have already been listed on The Morning Brew or will be there soon. Elijah Manor posts a lot of great content, which is available in his Twitter feed at @elijahmanor, on his Delicious feed, and on a dedicated website - Web Dev Tweets. While not 100% ASP.NET focused, I've been appreciating Joe Stagner's Weekly Links series, partly since he includes a lot of links that don't show up on my other lists.

    Twitter
    Over the past few years, I've been getting more and more of my information from my Twitter network (as opposed to RSS or other means). Twitter is as good as your network, so if getting good information off Twitter sounds crazy, you're probably not following the right people. I already mentioned Elijah Manor (@elijahmanor). I follow over a thousand people on Twitter, so I'm not going to try to pick and choose a list, but one good way to get started building out a Twitter network is to follow active Twitter users on the ASP.NET team at Microsoft: @scottgu (well, not on the ASP.NET team, but their great grand boss, and always a great source of ASP.NET info), @shanselman, @haacked, @bradwilson, @davidfowl, @InfinitiesLoop, @davidebbo, @marcind, @DamianEdwards, @stevensanderson, @bleroy, @humancompiler, @osbornm, @anurse. I'm sure I'm missing a few, and I'll update the list. Building a Twitter network that follows topics you're interested in allows you to use other tools like Cadmus to automatically summarize top content by leveraging the collective input of many users.

    Twitter Search with Topsy
    You can search Twitter for hashtags (like #aspnet, #aspnetmvc, and #webmatrix) to get a raw view of what people are talking about on Twitter. Twitter's search is pretty poor; I prefer Topsy. Here's an example search for the #aspnetmvc hashtag: http://topsy.com/s?q=%23aspnetmvc You can also do combined queries for several tags: http://topsy.com/s?q=%23aspnetmvc+OR+%23aspnet+OR+%23webmatrix

    Paper.li
    Paper.li is a handy service that builds a custom daily newspaper based on your social network. They've turned a lot of people off by automatically tweeting "The SuperDevFoo Daily is out!!!" messages (which can be turned off), but if you're ignoring them because of those messages, you're missing out on a handy, free service. My paper.li page includes content across a lot of interests, including ASP.NET: http://paper.li/jongalloway When I want to drill into a specific tag, though, I'll just look at the Paper.li post for that hashtag. For example, here's the #aspnetmvc paper.li page: http://paper.li/tag/aspnetmvc

    Delicious
    I mentioned previously that I use Delicious for managing site links. I also use their network and search features. The tag-based search is pretty good. Even better, though, is that I can see who's bookmarked these links, and add them to my Delicious network.
    After having built out a network, I can optimize by doing less searching and more leveraging of collective intelligence.

    Community Sites
    I scan DotNetKicks, the weblogs.asp.net combined feed, the ASP.NET Community page, CodeBetter, Los Techies, CodeProject, and DotNetSlackers from time to time. They're hit and miss, but they do offer more of an opportunity for finding original content which others may have missed.

    Terms of Enrampagement
    When someone's on a tear, I just manually check their sites more often. I could use RSS for that, but it changes pretty often. I just keep a mental note of people who are cranking out a lot of good content and check their sites more often.

    What works for you?

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In the series the following parts have been published:
    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    Developers are "spoilt" persons who expect to have easy debugging experiences for every technique they work with, so they also expect it when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself.

    Remote debugging prerequisites
    The prerequisite for these steps is to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information on how to install it at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose the option to run the remote debugger on the build server from a file share.

    Debugging symbols prerequisites
    To be able to start debugging, you need to have the pdb files on the build server together with the assembly. The pdb must have been built with Full Debug Info.

    Steps
    In my setup I have a development machine and a build server. To set up the remote debugging, I performed the following steps:
    1. Locate on your development machine the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger
    2. Create a share for the Remote Debugger folder. Make sure that the share (and the folder) has the correct permissions so the user on the build server has access to the share.
    3. On the build server, go to the shared "Remote Debugger" folder.
    4. Start msvsmon.exe, which is located in the folder that represents the platform of the build server. This will open a WinForms application.
    5. Go back to your development machine and open the BuildProcess solution.
    6. Start the Attach to Process command (Ctrl+Alt+P).
    7. Type the name of the build server in the Qualifier. In my case, the user account that started msvsmon is a different user than the user on my development machine; in that case you have to type the qualifier in the format shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter>.
    8. Since the build service is running with other credentials, check the option "Show processes from all users". Now the Attach to Process dialog shows the TFSBuildServiceHost process.
    9. Set the breakpoint in the activity you want to debug and kick off a build.

    Be aware that when you attach to the TFSBuildServiceHost, you debug every single build that is run by this Windows service, so make sure you don't debug a build server that is in production! You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • My Mother Bought a Droid

    - by Ben Griswold
    I converted to iPhone two years ago when I left my former employer and my Blackberry behind. The truth is I half-heartedly purchased my iPhone. It was a great looking device, but as far as I was concerned, it was a mere toy compared to my Blackberry.  I remember hiding the toy in my briefcase when attending business meetings because I didn’t consider it to be professional enough.  I’ve since owned all three generations of the iPhone and, well, iPhone seems to have caught on. I still miss the click of the Blackberry keyboard and the blinking red light letting me know that someone/something requires my attention, but I’m officially an iPhone fanboy now.  My mom called last weekend and asked if she should buy an iPhone. I talked her ear off about everything I love about iPhone. I went on for about twenty minutes. I couldn’t help myself. I mentioned everything from my podcast subscriptions to the application which manages my workouts.  I went as far as to say that someday all smart phones will be referred to as iPhones just like all tissues are referred to as Kleenex and all sodas are referred to as Cokes.  I was really on a roll and then I stopped. I had to…the call dropped.  There I was, strategically standing in the far corner of my backyard where I get the most reliable AT&T reception and the call drops in middle of my iPhone pitch.  Folks, I don’t care how good a salesperson you are, it’s tough to recover from a situation like this.   I dialed my mom back and jokingly asked if she was planning to make calls with her new phone. I explained that AT&T is bound to provide better service eventually but I’m not sure she should wait. After all, I have troubles with the network in San Diego and I can only image how bad it would be for her in Western Massachusetts. Mom called back a few days later exclaiming, “I bought a Droid! I love this phone! I haven’t done anything with it but make phone calls, but I love it.”  I had to laugh.  My mom made the right call (pun intended.)  The iPhone is an amazing device, but owners are constantly reminded that its core function (it’s a phone, remember?) is subpar.  If you love gadgets, you’re probably enthralled by iPhone’s many bells and whistles and, relatively speaking, the terrible phone service might not amount to much.  (Maybe it amounts to a rant on your blog.) The overall iPhone offering is so attractive that consumers are willing to wait for AT&T to straighten up their act or wait until Apple grants a choice of carriers.  But I don’t see either of these remedies coming soon. In the interim, I’m willing to take my iPhone for what it is and just continue to enjoy my favorite features while pretending that poor coverage isn’t a big deal. With any luck, more and more reasonable folks will recognize that Android Phones are legitimate players in the smart phone space, they will buy loads of them and there will become plenty of functional phones to borrow when my “phone” is showing zero bars.  Heck, I’m already covered when I visit my mom.

    Read the article

  • Upgrade Office 2003 to 2010 on XP or Run them Side by Side

    - by Mysticgeek
    If you’re still running XP, currently have Office 2003 installed on your machine, and skipped Office 2007, you might want to upgrade to Office 2010. In this guide we will show you the upgrade process, or how to run them side by side. In this example we are upgrading from Office 2003 Standard to Office Professional Plus 2010 RTM (Final) on XP Professional.

    System Requirements
    To run Office 2010 on your XP machine you have to make sure you have Service Pack 3 and Microsoft Silverlight installed (links below). Or you can just install them through Windows Update.

    Recommended Hardware
    1GHz CPU or higher
    512MB of RAM or higher
    1024x768 resolution or higher
    DirectX 9.0c compatible graphics card with 64MB of memory or higher

    Installing Office 2010
    Simply kick off the Office Professional Plus 2010 installation. Enter your product key… Agree to the EULA… Select the Customize button… Setup will detect Office 2003 and allow you to remove all applications, keep them, or select only the ones you want to keep. In this example we’re going to remove Excel and PowerPoint, and keep Outlook and Word 2003. Next, click the Installation Options tab and select the Office programs you want to install. Since we’re keeping Outlook 2003 and don’t want to use Outlook 2010, we’re making sure not to install Outlook 2010. However, we want to run Word 2003 and 2010 on the same machine. After you’ve made your selections click the Upgrade button. The installation begins and you’re shown the progress. The amount of time it takes to install will vary between systems. Installation is complete and you can close out of the installer. Now when you go into the Start menu under Microsoft Office, you’ll see both versions of the Office apps available. Here is a shot of Word 2003 and 2010 running together on our XP machine.

    Conclusion
    If you’re moving from Office 2003 to 2010, this allows you to install both versions side by side. It gives you a chance to learn 2010 features, and still work in the familiar 2003 environment when you need to get things done quickly. If you’re having problems installing Office 2010 make sure to check out our article on how to fix problems upgrading Office 2010 beta to RTM (Final) release. Also, if you were using Office 2007 and are currently using the 2010 beta, we have a guide on how to switch back to Office 2007 after the 2010 beta ends.

    Links
    XP Service Pack 3
    Microsoft Silverlight
    Details on Office 2010 System Requirements

    Read the article

  • Setting up SharePoint without Active Directory

    - by eJugnoo
    In order to set up SharePoint without AD, you need to run the following PowerShell command in the Management Shell after installing SharePoint on your server, but before running the Config Wizard (we don’t want to run this SP farm in stand-alone mode!):

    1. New-SPConfigurationDatabase

    SYNOPSIS
        Creates a new configuration database.
    SYNTAX
        New-SPConfigurationDatabase [-DatabaseName] <String> [-DatabaseServer] <String> [[-DirectoryDomain] <String>] [[-DirectoryOrganizationUnit] <String>] [[-AdministrationContentDatabaseName] <String>] [[-DatabaseCredentials] <PSCredential>] [-FarmCredentials] <PSCredential> [-Passphrase] <SecureString> [-AssignmentCollection <SPAssignmentCollection>] [<CommonParameters>]
    DESCRIPTION
        The New-SPConfigurationDatabase cmdlet creates a new configuration database on the specified database server. This is the central database for a new SharePoint farm.
        For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation (http://go.microsoft.com/fwlink/?LinkId=163185).
    RELATED LINKS
        Backup-SPConfigurationDatabase
        Disconnect-SPConfigurationDatabase
        Connect-SPConfigurationDatabase
        Remove-SPConfigurationDatabase
    REMARKS
        To see the examples, type: "get-help New-SPConfigurationDatabase -examples".
        For more information, type: "get-help New-SPConfigurationDatabase -detailed".
        For technical information, type: "get-help New-SPConfigurationDatabase -full".

    NOTE: Use the -AdministrationContentDatabaseName switch to pass the name of the Admin database you want instead of the GUID-based name it automatically creates. Hence, one can pretty easily control the Admin, Config, and Content database names at the time of farm creation. If creating a new farm, you can also delete and re-provision any automatically created service databases from the UI, to decide what database names you want.

    2. Run the SharePoint Configuration Wizard, and you’ll see the server already added to the farm. Select "do not disconnect from this server farm" and proceed. Select the port and the authentication (NTLM in my case). Click Next, and the wizard will complete the remaining steps of provisioning, including creation of the Central Admin web app on the desired port. Once successful, it will open the Central Admin site and ask you to run the Farm Config Wizard. I chose to skip it and do things manually, to remain in control of what is happening on the farm - like creating the web app for site collections, creating the very first site collection, and any other service applications.

    I needed this to create a public-facing installation of SharePoint Foundation RTM on a server which didn’t have AD. Now I am going to set up FBA, and possibly Live ID auth as well. I will also be setting up RBS and multi-tenancy on this farm, and will post any notes and findings here… --Sharad

    Read the article

  • Dude, what’s up with POP Forums vNext?

    - by Jeff
    Yeah, it has been awhile. I posted v9.2 back in January, about five months ago. That’s a real change from the release pace I had there for awhile. Let me explain what’s going on. First off, in the interim, I re-launched CoasterBuzz, which required a lot of my attention for about two of those months. That’s a good thing though, because that site is just about the best test bed I could ask for. The other thing is that I committed to make the next version use ASP.NET MVC 4, which is now at the RC stage. I didn’t think much about when they’d hit their RTW point, but RC is good enough for me. To that end, there is enough change in the next version that I recently decided to make it a major version upgrade, and finish up the loose ends and science projects to make it whole. Here’s what’s in store… Mobile views: I sat on this or a long time. Originally, I was going to use jQuery Mobile, and waited and waited for a new release, but in the end, decided against using it. Sometimes buttons would unexplainably not work, I felt like I was fighting it at times, and the CSS just felt too heavy. I rolled my own mobile sugar at a fraction of the size, and I think you’ll find it easy to modify. And it’s Metro-y, of course! Re-do of background services: A number of things run in the background, and I did quite a bit of “reimagining” of that code. It’s the weirdness of running services in a Web site context, because so many folks can’t run a bona fide service on their host’s box. The biggest change here is that these service no longer start up by default. You’ll need to call a new method from global.asax called PopForumsActivation.StartServices(). This is also a precursor to running the app in a Web farm (new data layer and caching is the second part of that). I learned about this the hard way when I had three apps using the forum library code but only one was actually the forum. The services were all running three times as often with race conditions and hits on the same data. That was particularly bad for e-mail. CSS clean up: It’s still not ideal, but it’s getting better. That’s one of those things that comes with integrating to a real site… you discover all of the dumb things you did. The mobile CSS is particularly easier to live with. Bug fixes: There are a whole lot of them. Most were minor, but it’s feeling pretty solid now. So that’s where I am. I’m going to call it v10.0, and I’m going to really put forth some effort toward finishing the mobile experience and getting through the remaining bugs. The roadmap beyond that will likely not be feature oriented, but rather work on some other things, like making it run in Azure, perhaps using SQL CE, a better install experience, etc. As usual, I’ll post the latest here. Stay tuned!

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process.

    Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:

    Requirements
    Analysis
    Design and Development
    Implementation
    Testing
    Deployment to Production
    Maintenance

    In an on-premise environment, this often equates to the following process map:

    Requirements - Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis - Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development - Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    Implementation - Code checked into the main branch. Code forked as needed.
    Testing - Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
    Deployment to Production - Server team involved to select platform and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed and systems are built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    Maintenance - Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

    Requirements - Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis - Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development - Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    Implementation - Code checked into the main branch. Code forked as needed.
    Testing - Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
    Deployment to Production - Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
    Maintenance - Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace"; in effect this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.

    Resources:
    Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
    Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
    LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • Postfix log.... spam attempt?

    - by luri
    I have some weird entries in my mail.log. What I'd like to ask is whether Postfix is correctly rejecting (according to the main.cf attached below) what seem to be relay attempts, presumably for spamming, or whether I can enhance its security somehow.

        Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: connect from catv-80-99-46-143.catv.broadband.hu[80.99.46.143]
        Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: warning: non-SMTP command from catv-80-99-46-143.catv.broadband.hu[80.99.46.143]: GET / HTTP/1.1
        Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: disconnect from catv-80-99-46-143.catv.broadband.hu[80.99.46.143]
        Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max connection rate 1/60s for (smtp:80.99.46.143) at Feb 2 11:53:25
        Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max connection count 1 for (smtp:80.99.46.143) at Feb 2 11:53:25
        Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max cache size 1 at Feb 2 11:53:25
        Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: connect from vs148181.vserver.de[62.75.148.181]
        Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: warning: non-SMTP command from vs148181.vserver.de[62.75.148.181]: GET / HTTP/1.1
        Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: disconnect from vs148181.vserver.de[62.75.148.181]
        Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max connection rate 1/60s for (smtp:62.75.148.181) at Feb 2 12:09:19
        Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max connection count 1 for (smtp:62.75.148.181) at Feb 2 12:09:19
        Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max cache size 1 at Feb 2 12:09:19
        Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: connect from unknown[202.46.129.123]
        Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: warning: non-SMTP command from unknown[202.46.129.123]: GET / HTTP/1.1
        Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: disconnect from unknown[202.46.129.123]
        Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max connection rate 1/60s for (smtp:202.46.129.123) at Feb 2 14:17:02
        Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max connection count 1 for (smtp:202.46.129.123) at Feb 2 14:17:02
        Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max cache size 1 at Feb 2 14:17:02
        Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: warning: 95.110.224.230: hostname host230-224-110-95.serverdedicati.aruba.it verification failed: Name or service not known
        Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: connect from unknown[95.110.224.230]
        Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: lost connection after CONNECT from unknown[95.110.224.230]
        Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: disconnect from unknown[95.110.224.230]
        Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max connection rate 1/60s for (smtp:95.110.224.230) at Feb 2 20:57:33
        Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max connection count 1 for (smtp:95.110.224.230) at Feb 2 20:57:33
        Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max cache size 1 at Feb 2 20:57:33
        Feb 2 21:13:44 MYSERVER pop3d: Connection, ip=[::ffff:219.94.190.222]
        Feb 2 21:13:44 MYSERVER pop3d: LOGIN FAILED, user=admin, ip=[::ffff:219.94.190.222]
        Feb 2 21:13:50 MYSERVER pop3d: LOGIN FAILED, user=test, ip=[::ffff:219.94.190.222]
        Feb 2 21:13:56 MYSERVER pop3d: LOGIN FAILED, user=danny, ip=[::ffff:219.94.190.222]
        Feb 2 21:14:01 MYSERVER pop3d: LOGIN FAILED, user=sharon, ip=[::ffff:219.94.190.222]
        Feb 2 21:14:07 MYSERVER pop3d: LOGIN FAILED, user=aron, ip=[::ffff:219.94.190.222]
        Feb 2 21:14:12 MYSERVER pop3d: LOGIN FAILED, user=alex, ip=[::ffff:219.94.190.222]
        Feb 2 21:14:18 MYSERVER pop3d: LOGIN FAILED, user=brett, ip=[::ffff:219.94.190.222]
        Feb 2 21:14:24 MYSERVER pop3d: LOGIN FAILED, user=mike, ip=[::ffff:219.94.190.222]
        Feb 2 21:14:29 MYSERVER pop3d: LOGIN FAILED, user=alan, ip=[::ffff:219.94.190.222]
        Feb 2 21:14:35 MYSERVER pop3d: LOGIN FAILED, user=info, ip=[::ffff:219.94.190.222]
        Feb 2 21:14:41 MYSERVER pop3d: LOGIN FAILED, user=shop, ip=[::ffff:219.94.190.222]
        Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: warning: 71.6.142.196: hostname db4142196.aspadmin.net verification failed: Name or service not known
        Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: connect from unknown[71.6.142.196]
        Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: lost connection after CONNECT from unknown[71.6.142.196]
        Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: disconnect from unknown[71.6.142.196]
        Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max connection rate 1/60s for (smtp:71.6.142.196) at Feb 3 06:49:29
        Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max connection count 1 for (smtp:71.6.142.196) at Feb 3 06:49:29
        Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max cache size 1 at Feb 3 06:49:29

    I have Postfix 2.7.1-1 running on Ubuntu 10.10. This is my main.cf (modified for privacy):

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        myhostname = mymailserver.org
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mymailserver.org, MYSERVER, localhost
        relayhost =
        mynetworks = 127.0.0.0/8, 192.168.1.0/24
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all
        home_mailbox = Maildir/
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        mailbox_command =
        smtpd_sasl_local_domain =
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_tls_security_level = may
        smtpd_tls_auth_only = no
        smtp_tls_note_starttls_offer = yes
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom
        smtp_tls_security_level = may
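
    In case it helps frame the question: one direction I have been considering, purely as an illustrative sketch rather than a tested fix, is tightening the smtpd restrictions and disabling VRFY along these lines (the directive choices below are my own guesses, not something derived from the log above):

        # Illustrative hardening sketch only -- verify each directive
        # against your Postfix version before using it.
        smtpd_helo_required = yes
        disable_vrfy_command = yes
        smtpd_recipient_restrictions = permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination,
            reject_non_fqdn_recipient,
            reject_unknown_sender_domain,
            reject_rbl_client zen.spamhaus.org

    Would something like that be a reasonable hardening step, or is the current configuration already sufficient?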

    Read the article

  • ArcGIS–Getting the Legend Labels out

    - by Avner Kashtan
    Working with ESRI’s ArcGIS package, especially the WPF API, can be confusing. There’s the REST API, the SOAP APIs, and the WPF classes themselves, which expose some web-service calls and information, but not everything. With all that, it can be hard to find specific features among the different options. Some functionality is handed to you on a silver platter, while some is maddeningly hard to implement. Today, for instance, I was working on adding a Legend control to my map-based WPF application, to explain the different symbols that can appear on the map. ESRI’s own map-editing tools show the legend one way; the Legend control supplied out of the box by ESRI renders it very prettily, but unfortunately without the option to display the names of the fields that make up the symbology. Luckily, the WPF controls have a lot of templating/extensibility points that allow you to specify the layout of each field:

        <esri:Legend>
          <esri:Legend.MapLayerTemplate>
            <DataTemplate>
              <TextBlock Text="{Binding Layer.ID}"/>
            </DataTemplate>
          </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    But that only replicates the built-in behavior. I could now add any additional fields I liked, but unfortunately I couldn’t find them as part of the Layer, GraphicsLayer or FeatureLayer definitions. This is where ESRI’s lack of organization is noticeable: I can see this data easily when accessing the ArcGIS Server’s web interface, but I had no idea how to find it in the built-in classes. Is it part of Layer? Of LayerInfo? Of the LayerDefinition class that exists only in the SOAP service? As it turns out, neither. Since these fields are used by the symbol renderer to determine which symbol to draw, they’re actually part of the layer’s Renderer. Since I already had a MyFeatureLayer class derived from FeatureLayer that added extra functionality, I could just add this property to it:

        public string LegendFields
        {
            get
            {
                if (this.Renderer is UniqueValueRenderer)
                {
                    return (this.Renderer as UniqueValueRenderer).Field;
                }
                else if (this.Renderer is UniqueValueMultipleFieldsRenderer)
                {
                    var renderer = this.Renderer as UniqueValueMultipleFieldsRenderer;
                    return string.Join(renderer.FieldDelimiter, renderer.Fields);
                }
                else return null;
            }
        }

    For my scenario, all of my layers used symbology derived from a single field or, as in the examples above, from several of them. The renderer even kindly supplied me with the comma to separate the fields with. Now it was a simple matter to get the Legend control in line – assuming that it was bound to a collection of MyFeatureLayer:

        <esri:Legend>
          <esri:Legend.MapLayerTemplate>
            <DataTemplate>
              <StackPanel>
                <TextBlock Text="{Binding Layer.ID}"/>
                <TextBlock Text="{Binding Layer.LegendFields}" Margin="10,0,0,0" FontStyle="Italic"/>
              </StackPanel>
            </DataTemplate>
          </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    And that gives the look I wanted – the list of fields below the layer name, indented.
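
    As a postscript, wiring all of this together in code-behind would look something like the following minimal sketch. The ESRI member names used here (FeatureLayer.Url, Map.Layers, Legend.Map) are my best recollection of the ArcGIS WPF API, and the layer ID and service URL are placeholders, so treat this as an assumption-laden illustration rather than verified code:

        using ESRI.ArcGIS.Client;
        using ESRI.ArcGIS.Client.Toolkit;

        public partial class MainWindow
        {
            // Sketch: MyMap and MyLegend are assumed to be the x:Name'd Map
            // and Legend controls from the XAML above; MyFeatureLayer is the
            // subclass described in the post.
            private void AddLegendLayer()
            {
                var parcels = new MyFeatureLayer
                {
                    ID = "Parcels",  // placeholder layer ID
                    Url = "http://myserver/ArcGIS/rest/services/Parcels/FeatureServer/0"  // placeholder URL
                };

                MyMap.Layers.Add(parcels);

                // Point the Legend at the map so the MapLayerTemplate bindings
                // (Layer.ID, Layer.LegendFields) resolve per layer.
                MyLegend.Map = MyMap;
            }
        }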

    Read the article

  • 24 hours to pass until 24 Hours of PASS

    - by Rob Farley
    There’s a bunch of stuff going on at the moment in the SQL world, so if you’ve missed this particular piece of news, let me tell you a bit about it. Twice a year, the SQL community puts on its biggest virtual event – 24 Hours of PASS. And the next one is tomorrow – March 21st, 2012. Twenty-four sessions, back-to-back, featuring a selection of some of the best presenters in the SQL world, speakers from all over the world, coming together in an online collaboration that so far has well over thirty thousand registrations across the presentations. Some people are signed up for all 24 sessions, some only one. Traditionally, LiveMeeting has been used as the platform for this event, but this year we’re going with a new platform – IBTalk. It promises big, and we’re hoping it won’t let us down. LiveMeeting has been great, and we thank Microsoft for providing it as a platform for the past few years. However, as the event has grown, we’ve found that a new idea is necessary. Last year a search was done for a new platform, and IBTalk ticked the right boxes. The feedback from the presenters and moderators so far has been overwhelmingly positive, and we’re hoping that this is going to really enhance the user experience. One of my favourite features of the platform is the language side. It provides a pretty good translation service. Users who join a session will see a flag on the left of the screen. If they click it, they can change the language to one of 15 on offer. Picking this changes all the labels on everything. It even translates the text in the Q&A window. What this means is that someone from Brazil can ask their question in Portuguese, and the presenter will see it in English. Then if the answer is typed in English, the questioner will be able to see the answer, also in Portuguese. Or they can switch to English to see it as the answerer typed it. I know there’s always the risk of bad translations going on, but I’ve heard good things about this translation service. But there’s more – IBTalk are providing staff to type up closed captioning live during the event. So if English isn’t your first language, don’t worry! Picking your language will also let you see subtitles in your chosen language. I’m hoping that this event is the start of PASS being able to reach people from all corners of the world. Wouldn’t it be great to find that this event is successful, and that the next 24HOP (later in the year, our Summit Preview event) has just as many non-English speakers tuning in as English speakers? If you haven’t been planning which sessions you’re going to attend, you really should get over to sqlpass.org/24hours and have a look through what’s on offer. There’s some amazing material from some of the industry’s brightest, covering a wide range of topics, from classic SQL areas to the brand new SQL 2012 features. There really should be something for every SQL professional. Check the time zones though – if you’re in the US you might be on Summer time, and an hour closer to GMT than normal. Massive thanks must go to Microsoft, SQL Sentry and Idera for sponsoring this event. Without sponsors we wouldn’t be able to put any of this on. These companies are helping 24HOP continue to grow into an event for the whole world. See you tomorrow! @rob_farley | #24hop | #sqlpass

    Read the article

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think it’s fair to say that in its first year it has proved to be a very useful resource; however, Kendra Little (@kendra_little) made a very salient point yesterday when she tweeted:

        Is there a way to search the archives of #sqlhelp Trying to remember answer to a question I know I saw a couple months ago
        http://twitter.com/#!/Kendra_Little/status/15538234184441856

    This highlights an inherent problem with Twitter’s search capability – it simply does not reach far enough back in time. I have made steps to remedy that situation by putting in place two initiatives to archive tweets that contain the #sqlhelp hashtag.

    The Archivist

    http://archivist.visitmix.com/ is a free service that, quite simply, archives a history of tweets containing a given search term by periodically polling Twitter’s search service with that term, and subsequently displays a dashboard providing an aggregate view of those tweets: tweet volume over time, top users, top words and so on (see the Archivist FAQ). I have set up an archive on The Archivist for “sqlhelp”, which you can view at http://archivist.visitmix.com/jamiet/7. Within 36 minutes of setting it up, the dashboard already held lots of good information, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner) and that SSIS has proven to be a rather (ahem) popular subject!!

    Datasift

    The Archivist has its uses, though for our purposes it has a couple of downsides. For starters, you cannot search through an archive (which is what Kendra was after), nor can you export the contents of an archive for offline analysis. For those functions we need something a bit more heavyweight, and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and provide them through an object called a Datasift stream. That sounds very similar to normal Twitter search, though it has one distinct advantage that other Twitter search tools do not – Datasift has access to Twitter’s Streaming API (aka the Twitter firehose). In addition it has a lot of other rather nice features:

    - It provides the Datasift API, which allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool)
    - It has a query language (called Filtered Stream Definition Language – FSDL for short)
    - A Datasift stream can consume (and filter) other Datasift streams
    - Datasift can (and does) consume services other than Twitter

    If I refer to Datasift as “ETL for tweets” then you may get some sort of idea what it is all about. Just as I did with The Archivist, I have set up a publicly available Datasift stream for “sqlhelp” at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data:

        twitter.text contains "sqlhelp"

    Pretty simple, eh? At the current time it provides little more than a rudimentary dashboard, but as Datasift is currently an alpha release I think this may be worth keeping an eye on. The real value, though, is the ability to consume the output of a stream via Datasift’s RESTful API. Observe:

        http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX

    (Note that an api_key is required during the alpha period so, given that I’m not supplying my api_key, this URI will not work for you.) Just to prove that a Datasift stream can indeed consume data from another stream, I have set up a second stream that further filters the first one for tweets containing “SSIS”. That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query:

        rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis")

    When Datasift moves beyond alpha I’ll reassess how useful this is going to be and post a follow-up blog. @Jamiet
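
    To make that concrete, here is a minimal C# sketch of pulling a stream down over the RESTful API for offline analysis. The "interaction" element name and the API-key placeholder are assumptions on my part; inspect an actual response (and Datasift’s documentation) before relying on the shape of the payload:

        using System;
        using System.Net;
        using System.Xml.Linq;

        class SqlHelpStreamReader
        {
            static void Main()
            {
                // The stream URI format shown above; YOUR_API_KEY is a placeholder.
                const string uri =
                    "http://api.datasift.net/stream.xml" +
                    "?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d" +
                    "&username=jamiet&api_key=YOUR_API_KEY";

                using (var client = new WebClient())
                {
                    // Fetch the raw XML payload for the #sqlhelp stream.
                    string xml = client.DownloadString(uri);

                    // "interaction" is a guess at the element name; adjust it
                    // once you have seen a real response.
                    var doc = XDocument.Parse(xml);
                    foreach (var interaction in doc.Descendants("interaction"))
                    {
                        Console.WriteLine(interaction.Value);
                    }
                }
            }
        }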

    Read the article

< Previous Page | 922 923 924 925 926 927 928 929 930 931 932 933  | Next Page >