Search Results

Search found 2678 results on 108 pages for 'lcd fire'.


  • PSU failing or Mainboard failing?

    - by Andrei Rinea
    I am having some trouble lately powering on my desktop workstation. When starting the PC after it has been off for hours (usually at least 8), it randomly fails to come on. What happens is this:

    1. I press the power button; nothing happens.
    2. I can hear a moderate buzzing noise at the back of the PC (near the PSU), but I can't say for sure that it isn't coming from the mainboard.
    3. If I keep pressing the power button a few times over 1-2 minutes, it will start.

    Another route, instead of (3), is to unplug the power cable from the PSU and wait 30 seconds. Then I press the power button and hold it for 30-60 seconds (I have had some success with a similar approach on notebooks). Then I plug the cable back into the PSU, press the power button once, and it starts normally. Also, while the machine is running normally, I keep hearing a low buzzing that seems to be fan-RPM-related (i.e. it changes when processing images or doing CPU-intensive work). What should I look into?

    UPDATE: It's getting worse. Today it took more than 10 retries and almost 20 minutes to start the computer. I tried the paperclip trick and the PSU behaves perfectly on its own. I managed to start the computer like this: I pressed the power button a few times, left the PC in a pre-startup state (the fans were working, the buzzing noise was strong), and went to eat, figuring I wouldn't set the house on fire that quickly and without smelling it. When I came back 10-15 minutes later, the computer booted up! I discussed this with a fellow at Intel and he told me the capacitors on the mainboard are probably a bit shot. If they are shot, he said, it should still start up perfectly while warm. So I restarted it warm a few times (with a 5-second cooldown and then a 40-second cooldown) and it started up perfectly. I can either replace the capacitors on the mainboard (doesn't sound worth it) or replace the mainboard (that option sucks too :)).

    FINAL INFO: It was the PSU after all. Although it was still powering the IDE and SATA drives, its mainboard power module was failing. I bought another mainboard just to find out that this wasn't the cause; now I'll have to return it somehow. The spare PSU is in the computer and doing well, although being larger (500W), it sounds like a plane taking off. I need a better one.

    Read the article

  • Preventing 'Reply-All' to Exchange Distribution Groups

    - by Larold
    This is another question in a short series regarding a challenging Exchange project my co-workers have been asked to implement. (I'm helping even though I'm primarily a Unix guy, because I volunteered to learn PowerShell and implement as much of the project in code as I could.)

    Background: We have been asked to create many distribution groups, say 500+. These groups will contain two types of members. (Apologies if I get these terms wrong.) One type will be internal AD users, and the other will be external users that I create Mail Contact entries for. We have been asked to make it so that a "Reply All" is not possible to any message sent to these groups. I don't believe that is 100% possible to enforce, for the following reasons. My question is: is my reasoning sound? If not, please feel free to educate me on if/how things can properly be implemented. Thanks!

    My reasoning on why it's impossible to prevent 100% of potential reply-all actions:

    1. An internal AD user could put the DL in their To: field, then click the '+' to expand the group. The group contains two external mail contacts. The message is sent to everyone, including those external contacts. External user #1 decides to reply-all, and his mail goes, at the very least, to external user #2, which wouldn't even involve our Exchange mail relays.
    2. An internal AD user could place the DL in their Outlook To: field, click the '+' button to expand the DL, and then fire off an email to everyone that was in the group, with the individual addresses listed in the To: field. Because the message was sent to multiple recipients in the To: field, the addresses have been "exposed", and anyone is free to reply-all; the replies just go to everyone in the To: field.

    Even if we try to set a Reply-To: field for all of these DLs, external mail clients are not obligated to abide by it, or to force their users to abide by it. Are my two points above valid? (I admit they are somewhat similar.) Am I correct to tell our leadership, "It is not possible to prevent 100% of the cases where someone will want to Reply-All to these groups UNLESS we train the users sending emails to these groups to use the Bcc: field at all times"? I am dying for any insight or parts of the equation I'm not seeing clearly. Thank you!!!

    Read the article

  • mac + parallels and https site test = router restarts

    - by Erik
    Ok, I have an interesting and very frustrating problem happening. I'm going to explain it the best I can. I work as a graphic designer and web designer on a Mac and have a Comcast internet connection that comes through a Comcast-branded router (SMC8014), which then ties into an Airport Extreme Base Station that runs my office network. I run OS X 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. Ok, so that's the basic background.

    Here's my issue. I've been collaborating on an ecommerce website with another designer/developer, and when testing the site on the PC side we have started to run into some sort of network problem. The site is https, if that matters at all; I suspect it may. Basically, when I run Parallels for testing, I am constantly having to restart the router in order to connect to the test site (it's hosted). The funny thing is I can access the rest of the internet fine, just not the site I'm working on, until I restart the router (it's sort of like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making CSS or HTML edits via something like Coda or CSSEdit (connected to the hosting server via FTP). The real problem is that once the problem starts, I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling. I cannot get any work done when I have to restart the router every couple of minutes.

    And if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple of miles away and experiences very similar problems under a slightly different setup. He also has Comcast as his internet provider, connects his router to an Airport, and primarily works on a Mac. The main difference is that rather than using a virtualizer like Parallels to test the website on the PC, he uses a real live PC on his network. Once he fires up the PC to do testing, he runs into the same issue described above: after a couple of page refreshes in Internet Explorer or another browser on the PC, the site becomes unresponsive and the router has to be restarted. Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7, 32-bit) to help me with this process: I keep my documents, music, video clips, movies, and pictures on my hard disk. They will not be scattered around the system; everything lives inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk. Files deleted from C:\Senthil\ should be deleted there, new files should be copied, and so on. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk.

    A couple of important requirements and points:

    1. I do NOT need multiple versions or historic versions. I don't need the previous versions of my files. I only want the latest copy to be present in my "backup".
    2. Incremental backup makes sense. If files were not touched since the last backup, it need not copy them.
    3. The size of my folder runs into GBs and in a year or two will go into TBs. But I will make sure the external HDD is equal to or bigger than my source folder.
    4. I do not want it to run automatically, because when I accidentally delete a file in my source, an automatic run would delete the one in the backup (I know this is why we have versioning facilities). I just want to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want, and the software can keep its smartness to itself :D The main reason I am asking this question is that I am a software developer and could write this myself, but I am a little constrained by time at the moment and want to know whether an existing program can do it. Kindly don't worry about earthquakes or fires or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because:

    1. I will have bigger things to worry about than my holiday memories.
    2. I don't think I will digitally store any life-ruining documents. This backup is only to avoid the inconvenience of obtaining a new copy of stuff that I already have, not to protect it against the end of the world.
    3. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift + Delete, or myself getting a little careless at times!

    In short: is there a file/folder syncing program that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
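    To make the requested behavior concrete, here is a minimal C# sketch of such a one-way mirror, assuming hypothetical paths (C:\Senthil as source, E:\Senthil as target) and a hypothetical MirrorDirectory helper; it copies new or changed files, propagates deletions, and only acts when run by hand:

        using System.IO;

        class MirrorSync
        {
            static void Main()
            {
                // Hypothetical paths; the external disk's drive letter will vary.
                MirrorDirectory(@"C:\Senthil", @"E:\Senthil");
            }

            static void MirrorDirectory(string source, string target)
            {
                Directory.CreateDirectory(target);

                // Incremental copy: skip files whose timestamp and size are unchanged.
                foreach (string srcFile in Directory.GetFiles(source))
                {
                    var src = new FileInfo(srcFile);
                    var dst = new FileInfo(Path.Combine(target, src.Name));
                    if (!dst.Exists || src.LastWriteTimeUtc != dst.LastWriteTimeUtc || src.Length != dst.Length)
                        src.CopyTo(dst.FullName, true);
                }

                // Propagate deletions: remove target files that vanished from the source.
                foreach (string dstFile in Directory.GetFiles(target))
                    if (!File.Exists(Path.Combine(source, Path.GetFileName(dstFile))))
                        File.Delete(dstFile);

                // Recurse into subdirectories, pruning ones deleted at the source.
                foreach (string srcDir in Directory.GetDirectories(source))
                    MirrorDirectory(srcDir, Path.Combine(target, Path.GetFileName(srcDir)));
                foreach (string dstDir in Directory.GetDirectories(target))
                    if (!Directory.Exists(Path.Combine(source, Path.GetFileName(dstDir))))
                        Directory.Delete(dstDir, true);
            }
        }

    (For what it's worth, the built-in robocopy C:\Senthil E:\Senthil /MIR on Windows 7 performs essentially the same mirroring, and like the sketch it only acts when you invoke it.)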

    Read the article

  • Installing and configuring Zend Framework 2 server-wide [Ubuntu] and test driving ZendSkeletonApplication

    - by kinologik
    I'm trying to have ZF2 installed for all my subdomains at once (Ubuntu 12.04). ZF2 just launched its first stable version, so I wanted to install it on my development server and finally get my hands dirty with it. I downloaded ZF2 and unzipped the files into /var/ZF2/ (which now contains Zend/[all components]). I then edited /etc/php5/apache2/php.ini and added the path to the ZF2 files: include_path = ".:/var/ZF2". I then downloaded the ZendSkeletonApplication and unzipped it into /var/www/skeleton. I know it is suggested to use composer.phar to install a ZF2 application, but:

    1. I don't want to make a local installation of ZF2; I want a server-wide installation and to be able to use my Zend components on all my domains/subdomains on my development server.
    2. Before using any automatic installation process, I'd really like to understand that process by doing it manually at first.

    Obviously, something goes wrong when I fire up ZendSkeletonApplication, and I get the following when I hit http://www.myDevServer.com/skeleton/public/ :

    Fatal error: Uncaught exception 'RuntimeException' with message 'Unable to load ZF2. Run `php composer.phar install` or define a ZF2_PATH environment variable.' in /var/www/skeleton/init_autoloader.php:48 Stack trace: #0 /var/www/skeleton/public/index.php(9): include() #1 {main} thrown in /var/www/skeleton/init_autoloader.php on line 48

    I have skimmed through the docs, tutorials, and the like, but there is no straightforward answer for this kind of configuration. In the official doc, in the (very short) installation chapter, I see a reference to adding an include path in PHP, but no example: http://zf2.readthedocs.org/en/latest/ref/installation.html

    "Once you have a copy of Zend Framework available, your application needs to be able to access the framework classes found in the library folder. Though there are several ways to achieve this, your PHP include_path needs to contain the path to Zend Framework's library."

    But then, when I get to the "Getting Started" chapter, it's all composer.phar and nothing else: http://zf2.readthedocs.org/en/latest/user-guide/skeleton-application.html

    I'm no sysadmin, just a Zend enthusiast. I'm pretty sure this PEBKAC problem might be obvious to those who already got into the previous ZF2 betas. Thanks for helping me out.

    EDIT: Problem was resolved, thanks to Daniel M. Setting up ZF2_PATH in httpd.conf was all that was needed:

    SetEnv ZF2_PATH /var/ZF2

    I also removed the include_path reference in php.ini and everything works just fine. So I have no idea why Zend suggested including it there in their official docs.

    Read the article

  • .NET Winform AJAX Login Services

    - by AdamSane
    I am working on a Windows Forms application that connects to an ASP.NET membership database, and I am trying to use the AJAX Login Service. No matter what I do, I keep getting 404 errors on the Authentication_JSON_AppService.axd call. Web config below:

    <?xml version="1.0"?>
    <!--
        Note: As an alternative to hand editing this file you can use the
        web admin tool to configure settings for your application. Use
        the Website->Asp.Net Configuration option in Visual Studio.
        A full list of settings and comments can be found in
        machine.config.comments usually located in
        \Windows\Microsoft.Net\Framework\v2.x\Config
    -->
    <configuration>
      <configSections>
        <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
            <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
            <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
              <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere"/>
              <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
              <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
              <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
            </sectionGroup>
          </sectionGroup>
        </sectionGroup>
      </configSections>
      <connectionStrings>
        <!-- Removed -->
      </connectionStrings>
      <appSettings/>
      <system.web>
        <!-- Set compilation debug="true" to insert debugging symbols into the compiled page.
             Because this affects performance, set this value to true only during development. -->
        <compilation debug="true">
          <assemblies>
            <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            <add assembly="System.Data.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
          </assemblies>
        </compilation>
        <membership defaultProvider="dbSqlMembershipProvider">
          <providers>
            <add name="dbSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="Fire.Common.Properties.Settings.dbFireConnectionString" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" applicationName="/" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7" minRequiredNonalphanumericCharacters="1" passwordAttemptWindow="10" passwordStrengthRegularExpression=""/>
          </providers>
        </membership>
        <roleManager enabled="true" defaultProvider="dbSqlRoleProvider">
          <providers>
            <add connectionStringName="Fire.Common.Properties.Settings.dbFireConnectionString" applicationName="/" name="dbSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
          </providers>
        </roleManager>
        <authentication mode="Forms">
          <forms loginUrl="Login.aspx" cookieless="UseCookies" protection="All" timeout="30" requireSSL="false" slidingExpiration="true" defaultUrl="default.aspx" enableCrossAppRedirects="false"/>
        </authentication>
        <authorization>
          <allow users="*"/>
          <allow users="?"/>
        </authorization>
        <customErrors mode="Off"></customErrors>
        <pages>
          <controls>
            <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </controls>
        </pages>
        <httpHandlers>
          <remove verb="*" path="*.asmx"/>
          <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          <add verb="GET,HEAD" path="ScriptResource.axd" validate="false" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        </httpHandlers>
        <httpModules>
          <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        </httpModules>
      </system.web>
      <location path="~/Admin">
        <system.web>
          <authorization>
            <allow roles="Admin"/>
            <allow roles="System"/>
            <deny users="*"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/Admin/System">
        <system.web>
          <authorization>
            <allow roles="System"/>
            <deny users="*"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/Export">
        <system.web>
          <authorization>
            <allow roles="Export"/>
            <deny users="*"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/Field">
        <system.web>
          <authorization>
            <allow roles="Field"/>
            <deny users="*"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/Default.aspx">
        <system.web>
          <authorization>
            <allow roles="Admin"/>
            <allow roles="System"/>
            <allow roles="Export"/>
            <allow roles="Field"/>
            <deny users="?"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/Login.aspx">
        <system.web>
          <authorization>
            <allow users="*"/>
            <allow users="?"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/App_Themes">
        <system.web>
          <authorization>
            <allow users="*"/>
            <allow users="?"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/Includes">
        <system.web>
          <authorization>
            <allow users="*"/>
            <allow users="?"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/WebServices">
        <system.web>
          <authorization>
            <allow users="*"/>
            <allow users="?"/>
          </authorization>
        </system.web>
      </location>
      <location path="~/Authentication_JSON_AppService.axd">
        <system.web>
          <authorization>
            <allow users="*"/>
            <allow users="?"/>
          </authorization>
        </system.web>
      </location>
      <system.codedom>
        <compilers>
          <compiler language="c#;cs;csharp" extension=".cs" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4">
            <providerOption name="CompilerVersion" value="v3.5"/>
            <providerOption name="WarnAsError" value="false"/>
          </compiler>
          <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4">
            <providerOption name="CompilerVersion" value="v3.5"/>
            <providerOption name="OptionInfer" value="true"/>
            <providerOption name="WarnAsError" value="false"/>
          </compiler>
        </compilers>
      </system.codedom>
      <!-- The system.webServer section is required for running ASP.NET AJAX under Internet
           Information Services 7.0. It is not necessary for previous versions of IIS. -->
      <system.webServer>
        <validation validateIntegratedModeConfiguration="false"/>
        <modules>
          <remove name="ScriptModule"/>
          <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        </modules>
        <handlers>
          <remove name="WebServiceHandlerFactory-Integrated"/>
          <remove name="ScriptHandlerFactory"/>
          <remove name="ScriptHandlerFactoryAppServices"/>
          <remove name="ScriptResource"/>
          <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          <add name="ScriptResource" verb="GET,HEAD" path="ScriptResource.axd" preCondition="integratedMode" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        </handlers>
      </system.webServer>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/>
            <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
          </dependentAssembly>
          <dependentAssembly>
            <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/>
            <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>
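    For what it's worth, the failing call can be reproduced outside the WinForms app with a plain HTTP request. Here is a minimal, hedged C# sketch (the application URL and credentials are placeholders, not from the question): the ASP.NET AJAX authentication service listens at Authentication_JSON_AppService.axd/Login and accepts a small JSON body, so a 404 from this snippet points at the handler mapping rather than at the credentials.

        using System;
        using System.Net;

        class AuthServiceSmokeTest
        {
            static void Main()
            {
                // Placeholder application root; substitute the real one.
                string url = "http://localhost/MyApp/Authentication_JSON_AppService.axd/Login";

                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.ContentType] = "application/json";
                    string body = "{\"userName\":\"testuser\",\"password\":\"secret\",\"createPersistentCookie\":false}";
                    try
                    {
                        // A {"d":true} response means the membership provider accepted the login.
                        Console.WriteLine(client.UploadString(url, body));
                        Console.WriteLine(client.ResponseHeaders[HttpResponseHeader.SetCookie]);
                    }
                    catch (WebException ex)
                    {
                        // A 404 here means the *_AppService.axd route isn't being served at all.
                        Console.WriteLine(ex.Message);
                    }
                }
            }
        }

    One detail worth checking against the config above: the scripting sections are declared under configSections, but no <system.web.extensions><scripting><webServices><authenticationService enabled="true"/> element appears, and the authentication service does not accept requests until it is explicitly enabled.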

    Read the article

  • Parallelism in .NET – Part 16, Creating Tasks via a TaskFactory

    - by Reed
    The Task class in the Task Parallel Library supplies a large set of features. However, creating a task, assigning it to a TaskScheduler, and starting it involves quite a few steps. This gets even more cumbersome when multiple tasks are involved. Each task must be constructed, duplicating any options required, then started individually, potentially on a specific scheduler. At first glance, this makes the new Task class seem like more work than ThreadPool.QueueUserWorkItem in .NET 3.5. In order to simplify this process, and make Tasks simple to use in simple cases without sacrificing their power and flexibility, the Task Parallel Library added a new class: TaskFactory.

    The TaskFactory class is intended to "Provide support for creating and scheduling Task objects." Its entire purpose is to simplify development when working with Task instances. The Task class provides access to the default TaskFactory via the Task.Factory static property. By default, TaskFactory uses the default TaskScheduler to schedule tasks on a ThreadPool thread. By using Task.Factory, we can automatically create and start a task in a single "fire and forget" manner, similar to how we did with ThreadPool.QueueUserWorkItem:

    Task.Factory.StartNew(() => this.ExecuteBackgroundWork(myData));

    This provides us with the same level of simplicity we had with ThreadPool.QueueUserWorkItem, but even more power. For example, we can now easily wait on the task:

    // Start our task on a background thread
    var task = Task.Factory.StartNew(() => this.ExecuteBackgroundWork(myData));

    // Do other work on the main thread,
    // while the task above executes in the background
    this.ExecuteWorkSynchronously();

    // Wait for the background task to finish
    task.Wait();

    TaskFactory simplifies creation and startup of simple background tasks dramatically. In addition to using the default TaskFactory, it's often useful to construct a custom TaskFactory. The TaskFactory class includes an entire set of constructors which allow you to specify the default configuration for every Task instance created by that factory. This is particularly useful when using a custom TaskScheduler. For example, look at the sample code for starting a task on the UI thread in Part 15:

    // Given the following, constructed on the UI thread
    // TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

    // When inside a background task, we can do
    string status = GetUpdatedStatus();
    (new Task(() => { statusLabel.Text = status; })).Start(uiScheduler);

    This is actually quite a bit more complicated than necessary. When we create the uiScheduler instance, we can use it to construct a TaskFactory that will automatically schedule tasks on the UI thread. To do that, we'd create the following on our main thread, prior to constructing our background tasks:

    // Construct a task scheduler from the current SynchronizationContext (UI thread)
    var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

    // Construct a new TaskFactory using our UI scheduler
    var uiTaskFactory = new TaskFactory(uiScheduler);

    If we do this, when we're on a background thread, we can use this new TaskFactory to marshal a Task back onto the UI thread. Our previous code simplifies to:

    // When inside a background task, we can do
    string status = GetUpdatedStatus();

    // Update our UI
    uiTaskFactory.StartNew(() => statusLabel.Text = status);

    Notice how much simpler this becomes! By taking advantage of the convenience provided by a custom TaskFactory, we can now marshal to set data on the UI thread in a single, clear line of code!
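    As a further illustration of those configuring constructors (a hedged sketch; the particular option values below are examples, not taken from the article), a TaskFactory can also bake in a default CancellationToken and TaskCreationOptions so that every task it starts shares them:

        using System.Threading;
        using System.Threading.Tasks;

        class FactoryDefaultsExample
        {
            static void Run()
            {
                var cts = new CancellationTokenSource();

                // Every task created by this factory observes the same token
                // and is marked LongRunning by default.
                var factory = new TaskFactory(
                    cts.Token,
                    TaskCreationOptions.LongRunning,
                    TaskContinuationOptions.None,
                    TaskScheduler.Default);

                Task task = factory.StartNew(() => { /* background work */ });
                task.Wait();
            }
        }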

    Read the article

  • The How-To Geek Holiday Gift Guide (Geeky Stuff We Like)

    - by The Geek
    Welcome to the very first How-To Geek Holiday Gift Guide, where we've put together a list of our absolute favorites to help you weed through all of the junk out there to pick the perfect gift for anybody. Though really, it's just a list of the geeky stuff we want. We've got a whole range of items on the list, from cheaper gifts that most anybody can afford, to the really expensive stuff that we're pretty sure nobody is giving us.

    Stocking Stuffers

    Here's a couple of ideas for items that won't break the bank.

    LED Keychain Micro-Light: Best little LED keychain light around. If they don't need the penknife of the above item, this is the perfect gift. I give them out by the handfuls and nobody ever says anything but good things about them. I've got ones that are years old and still running on the same battery. Price: $8

    Magcraft 1/8-Inch Rare Earth Cube Magnets: Geeks cannot resist magnets. Jason bought this pack for his fridge because he was sick of big clunky magnets… these things are amazing. One tiny magnet, smaller than an Altoid mint, can practically hold a clipboard right to the fridge. Amazing. I spend more time playing with them on the counter than I do actually hanging stuff. Price: $10

    Lots of Geeky Mugs: There's loads of fun, geeky mugs you can find on Amazon or anywhere else, and they are great choices for the geek who loves their coffee. You can get the Caffeine mug pictured here, or go with an Atari one, Canon Lens, or the Aperture mug based on Portal. Your choice. Price: $7

    Astronomy Powerful Green Laser Pointer: No, it's not a light saber, but it's nearly bright enough to be one. You can illuminate low-flying clouds at night or just blind some aliens on your day off. All that for an extremely low price. Loads of fun. Price: $15

    Geeky TV Shows and Books

    Sometimes you just want to relax and enjoy some TV or a good book. Here's a few choices.

    The IT Crowd Fourth Season: Ridiculous, funny show about nerds in the IT department, loved by almost all the geeks here at HTG. Justin even makes this required watching for new hires in his office so they'll get his jokes. You can pre-order the fourth season, or pick up seasons one, two, or three for even cheaper. Price: $13

    Doctor Who, Complete Fifth Series: It doesn't get any more nerdy than Eric's pick, the fifth all-new series of Doctor Who, where the Daleks are hatching a new master plan from the heart of war-torn London. There's also alien vampires, humanoid reptiles, and a lot more. Price: $52

    Battlestar Galactica Complete Series: Watch the epic fight to save the human race by finding the fabled planet Earth while being hunted by the robotic Cylons. You can grab the entire series on DVD or Blu-ray, or get the seasons individually. This isn't your average sci-fi TV show. Price: $150 for Blu-ray.

    MAKE: Electronics: Learning Through Discovery: Want to learn the fundamentals of electronics in a fun, hands-on way? The Make: Electronics book helps you build the circuits and learn how it all works, as if you had any more time between all that registry hacking and loading software on your new PC. Price: $21

    Geeky Gadgets for the Gadget-Loving Geek

    Here's a few of the items on our gadget list, though let's be honest: geeks are going to love almost any gadget, especially shiny new ones.

    Klipsch Image S4i Premium Noise-Isolating Headset with 3-Button Apple Control: If you're a real music geek looking for some serious quality in the headset for your iPhone or iPod, this is the pair that Alex recommends. They aren't terribly cheap, but you can get the less expensive S3 earphones instead if you prefer. Price: $50-100

    GP2X Caanoo MAME/Console Emulator: Eric says: "As an owner of an older version, I can say the GP2X is one of my favorite gadgets ever. Touted as a 'Retro Emulation Juggernaut,' the GP2X runs Linux and may be the only open source software console available. Sounds too good to be true, but isn't." Price: $150

    Roku XDS Streaming Player 1080p: If you do a lot of streaming over Netflix, Hulu Plus, Amazon's Video on Demand, Pandora, and others, the Roku box is a great choice to get your content on your TV without paying a lot of money. It's also got Wireless-N built in, and it supports full 1080p HD. Price: $99

    Western Digital WD TV Live Plus HD Media Player: If you've got a home media collection sitting on a hard drive or a network server, the Western Digital box is probably the cheapest way to get that content on your TV, and it even supports Netflix streaming too. It'll play loads of formats in full HD quality. Price: $99

    Fujitsu ScanSnap S300 Color Mobile Scanner: Trevor said: "This wonderful little scanner has become absolutely essential to me. My desk used to just be a gigantic pile of papers that I didn't need at the moment, but couldn't throw away 'just in case.' Now, every few weeks, I'll run that paper pile through this and then happily shred the originals!" Price: $300

    Doxie, the amazing scanner for documents: If you don't scan quite as often and are looking for a budget scanner you can throw into your bag, or toss into a drawer in your desk, the Doxie scanner is a great alternative that I've been using for a while. It's half the price, and while it's not as full-featured as the Fujitsu, it might be a better choice for the very casual user. Price: $150

    (Expensive) Gadgets Almost Anybody Will Love

    If you're not sure that one of the more geeky presents is gonna work, here's some gadgets that just about anybody is going to love, especially if they don't have one already. Of course, some of these are a bit on the expensive side, but it's a wish list, right?

    Amazon Kindle: The Kindle weighs less than a paperback book, the screen is amazing and easy on the eyes, and get ready for the kicker: the battery lasts at least a month. We aren't kidding, either; it really lasts that long. If you don't feel like spending money for books, you can use it to read PDFs, and if you want to get really geeky, you can hack it for custom screensavers. Price: $139

    iPod Touch or iPad: You can't go wrong with either of these presents. The iPod Touch can do almost everything the iPhone can do, including games, apps, and music, and it has the same Retina display as the iPhone, HD video recording, and a front-facing camera so you can use FaceTime. Price: $229+, depending on model. The iPad is a great tablet for playing games, browsing the web, or just using on your coffee table for guests. It's well worth buying one, but if you're buying for yourself, keep in mind that the iPad 2 is probably coming out in 3 months. Price: $500+

    MacBook Air: The MacBook Air comes in 11" or 13" versions, and it's an amazing little machine. It's lightweight, the battery lasts nearly forever, and it resumes from sleep almost instantly. Since it uses an SSD instead of a hard drive, you're barely going to notice any speed problems for general use. So if you've got a lot of money to blow, this is a killer gift. Price: $999 and up.

    Stuck with No Idea for a Present? Gift Cards!

    Yeah, you're not going to win any "thoughtful present" awards with these, but you might just give somebody what they really want: the new Angry Birds HD for their iPad, Cut the Rope, or anything else they want.

    iTunes Gift Card: Somebody in your circle getting a new iPod, iPhone, or iPad? You can get them an iTunes gift card, which they can use to buy music, games, or apps. Yep, this way you can gift them a copy of Angry Birds if they don't already have it. Or even Cut the Rope.

    Amazon.com Gift Card: No clue what to get somebody on your list? Amazon gift cards let them buy pretty much anything they want, from organic weirdberries to big screen TVs. Yeah, it's not as thoughtful as getting them a nice present, but look at the bright side: maybe they'll get you an Amazon gift card and it'll balance out.

    That's the highlights from our lists. Got anything else to add? Share your geeky gift ideas in the comments.

    Read the article

  • Non-Dom Element Event Binding with jQuery

    - by Rick Strahl
    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake 'events' on objects that are hookable. jQuery makes it real easy to bind events on DOM elements, and with a little bit of extra work (that I didn't know about) you can also set up bindings to non-DOM element 'events'. Assume for a second that you have a simple JavaScript object like this:

    var item = {
        sku: "wwhelp",
        foo: function() { alert('original foo function'); }
    };

    and you want to be notified when the foo function is called. You can use jQuery to bind the handler like this:

    $(item).bind("foo", function () { alert('foo hook called'); });

    Binding alone won't actually cause the handler to be triggered, so when you call:

    item.foo();

    you only get the 'original' message. In order to fire both the original handler and the bound event hook, you have to use the .trigger() function:

    $(item).trigger("foo");

    Now if you do the following complete sequence:

    var item = {
        sku: "wwhelp",
        foo: function() { alert('original foo function'); }
    };

    $(item).bind("foo", function () { alert('foo hook called'); });
    $(item).trigger("foo");

    you'll see the 'hook' message first, followed by the 'original' message, fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to what you can do with DOM elements. The main difference is that the 'event' has to be explicitly triggered in order for this to happen, rather than just calling the method directly.

    .trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property) which .trigger() searches for in its bound event list. Once the 'event' is found, it's called prior to execution of the original function. This is pretty useful, as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable, without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery's event methods, including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with postfixed string names (i.e. .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent).

    Watch out for an .unbind() Bug

    Note that there appears to be a bug with .unbind() in jQuery that doesn't reliably unbind an event and results in an "elem.removeEventListener is not a function" error. The following code demonstrates:

    var item = {
        sku: "wwhelp",
        foo: function () { alert('original foo function'); }
    };

    $(item).bind("foo.first", function () { alert('foo hook called'); });
    $(item).bind("foo.second", function () { alert('foo hook2 called'); });
    $(item).trigger("foo");

    setTimeout(function () {
        $(item).unbind("foo");
        // $(item).unbind("foo.first");
        // $(item).unbind("foo.second");
        $(item).trigger("foo");
    }, 3000);

    The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both when unbinding the plain "foo" value (whether the handlers were bound as "foo" or as "foo.first/second") and when removing both of the postfixed event handlers explicitly. Oddly, the following, which removes only one of the two handlers, works:

    setTimeout(function () {
        //$(item).unbind("foo");
        $(item).unbind("foo.first");
        // $(item).unbind("foo.second");
        $(item).trigger("foo");
    }, 3000);

    This actually works, which is weird, as the code in unbind tries to unbind using a DOM method that doesn't exist. <shrug>

    A partial workaround for unbinding all 'foo' events is the following:

    setTimeout(function () {
        $.event.special.foo = {
            teardown: function () {
                alert('teardown');
                return true;
            }
        };
        $(item).unbind("foo");
        $(item).trigger("foo");
    }, 3000);

    which is a bit cryptic, to say the least, but it seems to work more reliably. I can't take credit for any of this – thanks to Dave Reed and Damien Edwards, who pointed out some of these behaviors. I didn't find any good descriptions of the process, so I thought it'd be good to write it down here. Hope some of you find this helpful.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery.

    Read the article

  • Gartner PCC Follow-up: Interview with Chaeny Emanavin, Usability Lead - Office of Information Development

    - by [email protected]
    Last week at the Gartner Portals, Content and Collaboration conference in Baltimore, Chaeny and I co-presented on Oracle Enterprise 2.0 and BIA's Citizen Portal. Chaeny's presentation about the BIA solution was very well received, and I wanted to do a follow-up interview with Chaeny to discuss more details about their solution and its Enterprise 2.0 features.

    Ajay: What were the main objectives for the BIA Citizen Portal?

    Chaeny: The BIA Citizen Portal is designed to provide all the services of the Bureau of Indian Affairs to the community of 564 federally recognized tribes that include over 1.9 million American Indians and Alaska Natives. The BIA provides the same breadth of services that the entire U.S. Federal Government provides, in one small Bureau. So, we needed a solution that was flexible enough to handle content ranging from law enforcement to housing to education. A key objective for external users was to use the Web as a communications channel and keep them informed on what services are available. We also wanted to build an internal web presence and community for BIA's 5,000 employees to ensure that they update their content, leverage internal experts and create single sources of truth for key policy documents.

    Ajay: How is the project being implemented?

    Chaeny: We are using a phased approach. In Phases 1 and 2, interim internal and external sites were built to ensure usability and functional requirements were being met. In Phases 3 and 4, we built out a modern internal and external presence using Oracle WebCenter Suite and Oracle Universal Content Management (UCM), including enabling delegated content management for our internal business units. Phase 4 was completed in January 2010. Phase 5 will add deeper Enterprise 2.0 collaboration capabilities to the solution.

    Ajay: Are you integrating any existing sites into the new solution?

    Chaeny: Yes, we have a SharePoint implementation that we are using for document management. We needed more precise functionality, however. We found that SharePoint would let individual administrators of a SharePoint site actually create new sites. In a 3-month span, we had over 200 new sites created, and most were not being used. So, we had an enormous sprawl problem. Our requirements mandated increased governance, more granular control over the creation of sites, and flexible user access to content. In SharePoint this required custom code and was very time-intensive, which was unfeasible given our tight deadlines. We are piloting Oracle WebCenter Spaces as our collaboration solution to mitigate these issues. However, we must integrate our existing SharePoint investment, which we can do easily by using the SharePoint connectors available in Oracle WebCenter and UCM.

    Ajay: What were the key design parameters for your solution?

    Chaeny: We wanted everything driven by standards and policies. We created a cross-functional steering group called the Indian Affairs Web Council to codify policies that were baked into the system. Other key design areas were focused on security/governance, self-service content management, ease of use, integration with legacy applications, and seamless single sign-on. We are using Dublin Core as our metadata standard. We also are using Java, APEX, and ADF as our development standards.

    Ajay: Why was it important to standardize on a platform?

    Chaeny: We initially looked at best-of-breed solutions, but we faced a lot of issues getting the different solutions to work together. Going with an integrated solution was more economical, easier to learn, and faster to deliver.

    Ajay: What type of legacy applications are you integrating into the portal?

    Chaeny: Initially we are starting with administrative apps such as the people directory and user admin, and then we will integrate HR and Financial applications, among others.

    Ajay: Can you describe some of the Enterprise 2.0 collaboration features you are putting into the solution?

    Chaeny: We are adding Enterprise 2.0 features using Oracle WebCenter Spaces to deliver different collaboration tools such as wikis, blogs and discussion forums: wikis to create rapid, ad hoc monthly roll-up reports; discussion forums to provide context-specific help; and blogs to capture tacit organizational knowledge from experts, identify gurus and turn tacit knowledge into explicit knowledge.

    Ajay: Are you doing anything specifically to spur adoption and usage?

    Chaeny: Yes, we did several things that I think helped us ramp quickly. First, we met our commitments for the new system launch date and also provided extra resources for a customer support "hotline" during the launch period. Prior to launch, we did exhaustive usability studies to capture user requirements around functionality, navigation and other key interaction areas. We also created extensive training programs so that the content managers in each business unit were comfortable using the content management tools and knew the best practices for usage. Finally, to launch the Enterprise 2.0 collaboration capabilities, we are working with a pilot group from the Division of Forestry and Wildland Fire Management of BIA. This group of people have in the past been willing early adopters, and they have a strong business need to collaborate with many agencies, both internal and external, across State, County and other Federal jurisdictions. Their feedback is key to helping us launch Enterprise 2.0 successfully in our broader organization.

    Ajay: What were the biggest benefits to internal BIA employees and to the external community of users?

    Chaeny: For our employees, the new Enterprise 2.0-based solution will make it easier to find information, enhance employee productivity by embedding standard business processes into the system, and create more of a community by creating connections with experts via social collaboration, to ultimately provide better services more quickly. For the external American Indian and Alaska Native communities, we have a better relationship with the users, and the new site has improved BIA's perception as a more responsive and customer-centric organization.

    Read the article

  • Silverlight Cream for December 05, 2010 -- #1003

    - by Dave Campbell
    In this (Almost) All-Submittal Issue: John Papa (x2), Jesse Liberty, Tim Heuer, Dan Wahlin, Markus Egger, Phil Middlemiss, Coding4Fun, Michael Washington, Gill Cleeren, MichaelD!, Colin Eberhardt, Kunal Chowdhury, and Rabeeh Abla.

    Above the Fold:
    Silverlight: "Two-Way Binding on TreeView.SelectedItem" - Phil Middlemiss
    WP7: "Taking Screen Shots of Windows Phone 7 Panorama Apps" - Markus Egger
    Training: "Beginners Guide to Visual Studio LightSwitch (Part - 4)" - Kunal Chowdhury

    Shoutouts:
    Don't let the fire go out... check out the Firestarter Labs.
    Bart Czernicki discusses the need for 64-bit Silverlight: Why a 64-bit runtime for Silverlight 5 Matters.
    Laurent Duveau is interviewed by the SilverlightShow folks about his WP7 app: Laurent Duveau on Morse Code Flash Light WP7 Application.

    From SilverlightCream.com:

    Silverlight 5 Features: John Papa has a post up highlighting his take on what's cool in the new feature set for Silverlight 5, including an external link to the keynote.

    Silverlight Firestarter Keynote and Sessions: John Papa has also posted links to all the individual session videos... what a great resource!

    Yet Another Podcast #17 – Scott Guthrie: Jesse Liberty went big with his latest Yet Another Podcast... he interviews Scott Guthrie about the Firestarter, Silverlight, WP7, and more.

    Silverlight 5 Plans Revealed: With this post from Tim Heuer, I find myself adding a Silverlight 5 tag... so bring on the fun! Unless you've been overloaded like I have since last Thursday, you've probably seen this, but what the heck...

    Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code: Phoenix's own Dan Wahlin had a great WCF RIA Services presentation at the Firestarter last week, and his material and lots of other good links are up on his blog. And I'd say that even if he didn't have a couple of shoutouts to me in it :) Thanks Dan!!

    Taking Screen Shots of Windows Phone 7 Panorama Apps: Markus Egger helps us all out with a post on how to get screenshots of your WP7 Panorama app... in case you haven't tried it, it's not as easy as it sounds!

    Two-Way Binding on TreeView.SelectedItem: Phil Middlemiss is back with a post taking some of the mystery out of the TreeView control bound to a data context and dealing with the SelectedItem property... oh yeah, and throwing all that into MVVM! Great tutorial as usual, a cool behavior, and all the source.

    Native Extensions for Microsoft Silverlight: Alan Cobb pointed me to a quick post up on the Coding4Fun site about the NESL (Native Extensions for Silverlight) from Microsoft, which gives access to some cool features of Windows 7 from Silverlight... I added an NESL tag in case other posts appear on this subject.

    Silverlight Simple Drag And Drop / Or Browse View Model / MVVM File Upload Control: Michael Washington has another great tutorial up at CodeProject that expands on prior work he'd done with drag/drop file upload, with this post on integrating an updated browse/upload into ViewModel/MVVM projects, all of which is Blendable.

    The validation story in Silverlight (Part 1): In good news for all of us, Gill Cleeren has started a tutorial series at SilverlightShow on Silverlight validation. The first one is up, discussing the basics...

    The Common Framework: MichaelD! has a WPF/Silverlight framework up with Facebook authentication, XAML-driven IoC, T4 synchronous WCF proxies, and WP7 on the roadmap... source on CodePlex; check it out and give him some feedback.

    Exploring Reactive Extensions (Rx) through Twitter and Bing Maps Mashups: If you've been waiting around to learn Rx, Colin Eberhardt has the post up for you (and me)... a great tutorial on Twitter and Bing Maps mashups... and all the code, for the Twitter immediate app and also the UKSnow one we showed last week... check out the demo page and grab the source!

    Beginners Guide to Visual Studio LightSwitch (Part - 4): Kunal Chowdhury has the 4th part of his LightSwitch tutorial series up at SilverlightShow. In this one, he shows how to integrate multiple tables into a screen.

    Take Your Silverlight Application Full Screen & intercept all windows keys !!: Rabeeh Abla sent me this link to the blog describing a COM-exposed library that intercepts all keys when Silverlight is full-screen. There are a few I hit when I'm going through blogs that Ctrl-W (FF) just won't take down, and that annoys me... so this might be a solution if you have that problem... worth a look anyway!

    Stay in the 'Light!

    Read the article

  • Nerdstock 2012: A photo review of Microsoft TechEd North America 2012

    - by The Un-T Guy
    Not only could I not fathom that I would ever be attending a tech event of the magnitude of TechEd; neither could any of my co-workers. As the least technical person in the history of Information Technology ever, I felt as though I were walking into the belly of the beast, fearing I'd not be allowed out until I could write SSIS packages, program in Visual Basic, or at least arm-wrestle a DBA. Most of my fears were unrealized.

    But I made it. I was here. I even got to wear the Mark of the Geek neck package with schedule, eyeglass cleaners, name badge (company name obfuscated so they don't fire me), and a pen. The name badge was seemingly the key element, as every vendor in the place wanted to scan it to capture name, email address, and numbers to show their bosses back home. It also let me eat the food and drink the coffee, so that's a fair trade.

    A recurring theme throughout the presentations and vendor demos was "the Cloud" and BYOD (bring your own device). The below was a common sight throughout the week, as attendees from all over the world brought their own devices and were able to (seemingly) seamlessly connect to the Worldwide Innerwebs. Apparently proof that Microsoft and the event organizers were practicing what they were preaching.

    "Cavernous" is one way to describe the downstairs facility itself. "Freaking cavernous" might be more accurate. Work sessions were held in classrooms on the second and third floors, but the real action was happening downstairs. Microsoft bookstore, blogger hub (shoutout to Geekswithblogs.net), The Wall (sans Pink Floyd, sadly), couches, recharging stations…

    …a game zone with pool and air hockey tables, pinball machines, foosball…

    …vintage video games…

    …and even a giant chess board. Looked like this guy was opening with the Kaspersky parry.

    The blend of technology and fantasy even went so far as to bring childhood favorites to life. Assuming, of course, your childhood was pre-video games (like mine) and you were stuck with electric football and Rock 'em Sock 'em Robots.

    And, lest the "combatants" become unruly or – God forbid – afternoon snacks were late, Orange County's finest was on the scene to keep the peace. On a high-tech mode of transport, of course.

    She wasn't the only one to think this was a swell way to transition from one concourse to the next. Given the level of support provided by the entire Orange County Convention Center staff, I knew they had to have some secret.

    Here's one entrance to the vendor zone/"Technical Learning Center." Couldn't help but think of the vendors as remoras attached to the Whale Shark that is Microsoft…

    …or perhaps planets orbiting the sun. Microsoft is just that huge, and it seemed like every vendor in the industry looks forward to partnering with the tech behemoth.

    Aside from the free stuff from the vendors, probably the most popular place in the house was the dining area. Amazing spreads every day, multiple times a day. While no attendance numbers were available at press time, literally thousands of attendees were fed, and fed well, every day. And lest you think my post from earlier in the week exaggerated about the backpacks…

    …or that I'm exaggerating about the lunch crowds: this represents only about 25-30% of the lunch crowd – it was all my camera could capture at once. No one went away hungry.

    The only thing missing was a vat of Red Bull, but apparently organizers went old school, with probably 100 urns of the original energy drink – coffee – all around the venue.

    Of course, following lunch and afternoon sessions, some preferred the even older school method of re-energizing. There were rumors that Microsoft was serving graham crackers and milk in this area. But they were only rumors.

    I cannot overstate the wonderful service provided by the Orange County Convention Center staff. Coffee, soft drinks, juice, and water were always available. Buffet meals were delicious, with a wide range of healthy options available, in addition to hundreds (at least) of special meal requests supported every day. Ever tried to keep up with an estimated 9,000 hungry and thirsty IT-ers? These folks did. Kudos to all of the staff, and many thanks!

    And while I occasionally poke fun at the Whale Shark, if nothing else this experience convinced me of one thing: Microsoft knows how to put on a professional event. Hundreds of informative, professionally delivered sessions covering a wide range of topics set at varying levels of expertise (some that even I was able to follow), social activities, vendor partnerships… they brought everything you could ask for to inform, educate, and inspire an entire IT industry.

    So as I depart the belly of the beast, I can both take pride in the fact that I survived the week and marvel at the brilliance surrounding me. The IT industry – or at least the segment associated with Microsoft – is in good, professional hands. And what won't fit in their hands can be toted in the Microsoft-provided backpacks. Win-win.

    Until New Orleans…

    Read the article

  • Information Indepth Newsletter - Linux Edition

    - by Paulo Folgado
INFORMATION INDEPTH NEWSLETTER - Linux Edition - February 2011
Stay Connected

NEWS

Now Available: Oracle Linux 6
Get the latest release of Oracle Linux 6, which includes Unbreakable Enterprise Kernel. Download Oracle Linux 6 | Read More

Customers Succeed by Using Oracle Exadata with Oracle Linux
Watch IT executives from Bank of America, Linkshare, and Johns Hopkins as they talk about the business challenges they faced and why they chose to use Oracle Linux along with Oracle Exadata as the solution. Watch Now

Video Interview: Oracle Senior Vice President Wim Coekaerts
Watch Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, as he talks about use cases for Oracle VM Templates as well as the Unbreakable Enterprise Kernel for Linux. Watch Now

Hot Off the Press: Migrate Your IBM AIX Environment to Oracle Linux
This new white paper provides recommendations for planning and implementing the migration of applications from an IBM Power System running AIX to Oracle's Sun Fire X4800 Server with Intel Xeon 7560 Processor running Oracle Linux 5.5. Read More

BLOGOSPHERE

Just Launched: The Oracle Linux Blog
Follow our new Oracle Linux blog to hear the latest updates, product news, upcoming events, and all the latest happenings, directly from the Linux team at Oracle.

TECH DIVE

NEW: Linux/Oracle Solaris CommandComparo Site from Oracle Technology Network
This site gives equivalent command syntax in Oracle Solaris 10 and Oracle Enterprise Linux 5 for common administrative tasks--focusing particularly on tasks that have tricky syntax or that you frequently need to double check. It acts as a quick reference for administrators who operate in these two OS environments.

Free Download: Oracle Linux Release 5.6
Did you know that by using Oracle Linux 5.5 or 5.6 along with the Unbreakable Enterprise Kernel, you can get all the benefits of Linux mainline kernel 2.6.32 and more, right now, without the need to reinstall or migrate to a new operating system such as RHEL 6? Read Release Notes | Download Oracle Linux 5.6

LSB 4.0 Certification Completed for Oracle Linux 5.5
Oracle Linux 5.5 with Unbreakable Enterprise Kernel successfully completed the LSB 4.0 certification.

WEBCASTS

Boost Your Linux Performance with Oracle's Enhancements in Infiniband and RDS
Register to hear Director of Kernel Engineering Chris Mason cover scalability and performance improvements in the Linux environment.

Get the Facts: Oracle's Unbreakable Enterprise Kernel
SVP Wim Coekaerts and Senior Director Monica Kumar cover the facts about and benefits of using Unbreakable Enterprise Kernel.

View Other Webcasts on Demand

EVENTS

Collaborate 2011, April 10-14, Orlando, Florida
Cloud Summit Events, Worldwide - various dates (check the city for date/time of event)
Datacenter Efficiency Events Worldwide (these events include Linux and Oracle VM sessions) - various dates (check the city for date/time of event)
Virtualization Events in North America
Find an Oracle Event

EDUCATION

Get Oracle Linux Certified from Oracle University
Oracle University offers courses in both Oracle Linux and the administration of Oracle Database on Linux.

CUSTOMER SPOTLIGHT

Pella Corporation Improves IT Performance and Efficiency with Oracle Linux and Oracle VM
To improve IT performance and efficiency and lower operational costs, Pella Corporation has standardized on Oracle VM and Oracle Linux. Read More

Disney Store Deploys POS in 330 Stores and 7 Countries on Oracle Linux
Disney Store is running 1,500 registers worldwide on a broad Oracle technology software stack including Oracle Database 11g, Oracle Fusion Middleware, and Oracle Linux. Read More

PARTNER SPOTLIGHT

Emulex and Oracle Announce Data Integrity Features
The Unbreakable Enterprise Kernel provides data integrity checking between Oracle Database applications and Emulex 8Gb/s LightPulse Fibre Channel Host Bus Adapters. Read More

Dell Inc.
Dell Inc. tested and validated configurations support Oracle Linux.

STAY IN TOUCH

Follow @ORCL_Linux on Twitter for the latest penguin tweets
Bookmark Oracle.com/Linux
Read the Oracle Linux blog

Oracle Information InDepth newsletters bring targeted news, articles, customer stories, and special offers to business people who want to find out how to streamline enterprise information management, measure results, improve business processes, and communicate a single truth to their constituents. Please send questions or comments to [email protected]. For answers to questions about subscribing, unsubscribing, and managing your Oracle e-mail communications preferences, please see the Oracle E-Mail Communications page. Copyright © 2011, Oracle Corporation and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

    Read the article

  • Inserting and Deleting Sub Rows in GridView

    - by Vincent Maverick Durano
A user in the forums (http://forums.asp.net) is asking how to insert sub rows in a GridView and also add delete functionality for the inserted sub rows. In this post I'm going to demonstrate how to do this in ASP.NET WebForms. The basic idea is that we just need to insert row data in the DataSource that is being used by the GridView, since the GridView rows will be generated based on the DataSource data. To make it clearer, let's build a sample application. To start, fire up Visual Studio and create a WebSite or Web Application project, then add a new WebForm. In the WebForm ASPX page, add this GridView markup:

<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false" OnRowDataBound="GridView1_RowDataBound">
    <Columns>
        <asp:BoundField DataField="RowNumber" HeaderText="Row Number" />
        <asp:TemplateField HeaderText="Header 1">
            <ItemTemplate>
                <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
            </ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField HeaderText="Header 2">
            <ItemTemplate>
                <asp:TextBox ID="TextBox2" runat="server"></asp:TextBox>
            </ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField HeaderText="Header 3">
            <ItemTemplate>
                <asp:TextBox ID="TextBox3" runat="server"></asp:TextBox>
            </ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField HeaderText="Action">
            <ItemTemplate>
                <asp:LinkButton ID="LinkButton1" runat="server" OnClick="LinkButton1_Click" Text="Insert"></asp:LinkButton>
            </ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>

Then, in the code-behind of the ASPX page, add the code below:

// Builds a dummy DataTable to serve as the GridView's DataSource
private DataTable FillData() {

    DataTable dt = new DataTable();
    DataRow dr = null;

    // Create DataTable columns
    dt.Columns.Add(new DataColumn("RowNumber", typeof(string)));

    // Create a row for each entry
    dr = dt.NewRow();
    dr["RowNumber"] = 1;
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 2;
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 3;
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 4;
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 5;
    dt.Rows.Add(dr);

    // Store the DataTable in ViewState for future reference
    ViewState["CurrentTable"] = dt;

    return dt;
}

private void BindGridView(DataTable dtSource) {
    GridView1.DataSource = dtSource;
    GridView1.DataBind();
}

private DataRow InsertRow(DataTable dtSource, string value) {
    DataRow dr = dtSource.NewRow();
    dr["RowNumber"] = value;
    return dr;
}

protected void Page_Load(object sender, EventArgs e) {
    if (!IsPostBack) {
        BindGridView(FillData());
    }
}

protected void LinkButton1_Click(object sender, EventArgs e) {
    LinkButton lb = (LinkButton)sender;
    GridViewRow row = (GridViewRow)lb.NamingContainer;
    DataTable dtCurrentData = (DataTable)ViewState["CurrentTable"];
    if (lb.Text == "Insert") {
        // Insert a new row below the selected row
        dtCurrentData.Rows.InsertAt(InsertRow(dtCurrentData, row.Cells[0].Text + "-sub"), row.RowIndex + 1);
    }
    else {
        // Delete the selected sub row
        dtCurrentData.Rows.RemoveAt(row.RowIndex);
    }

    BindGridView(dtCurrentData);
    ViewState["CurrentTable"] = dtCurrentData;
}

protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e) {
    if (e.Row.RowType == DataControlRowType.DataRow) {
        if (e.Row.Cells[0].Text.Contains("-sub")) {
            ((LinkButton)e.Row.FindControl("LinkButton1")).Text = "Delete";
        }
    }
}

As you can see, the code above is pretty straightforward and self-explanatory, but to give you a short explanation: the code is composed of three private methods - FillData(), BindGridView() and InsertRow(). The FillData() method returns a DataTable and basically creates dummy data in the DataTable to be used as the GridView DataSource. You can replace the code in that method if you want to use actual data from a database, but for the purpose of this example I just fill the DataTable with dummy data. BindGridView() handles the actual binding of the GridView. InsertRow() is a method that returns a DataRow; it handles the insertion of the sub row. Now, in the LinkButton OnClick event, we cast the sender to a LinkButton to determine the specific object that fired the event and to get the row values. We then reference the DataTable from ViewState to get the current data that is being used by the GridView. If the LinkButton text is "Insert" then we insert a new row into the DataSource (in this case the DataTable) based on the row index; if not, we delete the sub row that was added. Here are some screenshots of the output below: On initial load:   After inserting a sub row:   That's it! I hope someone finds this post useful!   Technorati Tags: ASP.NET,C#,GridView
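
Incidentally, if you'd rather not repeat the NewRow/Add block five times in FillData(), the same dummy table can be built with a loop. This is just a compacted, equivalent version of the method above:

// Equivalent FillData(): same five dummy rows, built in a loop
private DataTable FillData() {
    DataTable dt = new DataTable();
    dt.Columns.Add(new DataColumn("RowNumber", typeof(string)));

    for (int i = 1; i <= 5; i++) {
        DataRow dr = dt.NewRow();
        dr["RowNumber"] = i.ToString();
        dt.Rows.Add(dr);
    }

    // Store the DataTable in ViewState for future reference
    ViewState["CurrentTable"] = dt;
    return dt;
}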

    Read the article

  • I Didn't Get You Anything…

    - by Bob Rhubart
Nearly every day this blog features a list of posts and articles written by members of the OTN architect community. But with Christmas just days away, I thought a break in that routine was in order. After all, if the holidays aren't excuse enough for an off-topic post, then the terrorists have won. Rather than buy gifts for everyone -- which, given the readership of this blog and my budget, could amount to a cash outlay of upwards of $15.00 -- I thought I'd share a bit of holiday humor. I wrote the following essay back in the mid-90s, for a "print" publication that used "paper" as a content delivery system.  That was then. I'm older now, my kids are older, but my feelings toward the holidays haven't changed… It's New, It's Improved, It's Christmas! The holidays are a time of rituals. Some of these, like the shopping, the music, the decorations, and the food, are comforting in their predictability. Other rituals, like the shopping, the music, the decorations, and the food, can leave you curled into the fetal position in some dark corner, whimpering. How you react to these various rituals depends a lot on your general disposition and credit card balance. I, for one, love Christmas. But there is one Christmas ritual that really tangles my tinsel: the seasonal editorializing about how our modern celebration of the holidays pales in comparison to that of Christmas past. It's not that the old notions of how to celebrate the holidays aren't all cozy and romantic--you can't watch marathon broadcasts of "It's A Wonderful White Christmas Carol On Thirty-Fourth Street Story" without a nostalgic teardrop or two falling onto your plate of Christmas nachos. It's just that the loudest cheerleaders for "old-fashioned" holiday celebrations overlook the fact that way-back-when those people didn't have the option of doing it any other way. Dashing through the snow in a one-horse open sleigh? No thanks. When Christmas morning rolls around, I'm going to be mighty grateful that the family is going to hop into a nice warm Toyota for the ride over to grandma's place. I figure a horse-drawn sleigh is big fun for maybe fifteen minutes. After that you're going to want Old Dobbin to haul ass back to someplace warm where the egg nog is spiked and the family can gather in the flickering glow of a giant TV and contemplate the true meaning of football. Chestnuts roasting on an open fire? Sorry, no fireplace. We've got a furnace for heat, and stuffing nuts in there voids the warranty. Any of the roasting we do these days is in the microwave, and I'm pretty sure that if you put chestnuts in the microwave they would become little yuletide hand grenades. Although, if you've got a snoot full of Yule grog, watching chestnuts explode in your microwave might be a real holiday hoot. Some people may see microwave ovens as a symptom of creeping non-traditional holiday-ism. But I'll bet you that if there were microwave ovens around in Charles Dickens' day, the Cratchits wouldn't have had to entertain an uncharacteristically giddy Scrooge for six or seven hours while the goose cooked. Holiday entertaining is, in fact, the one area that even the most severe critic of modern practices would have to admit has not changed since Tim was Tiny. A good holiday celebration, then as now, involves lots of food, free-flowing drink, and a gathering of friends and family, some of whom you are about as happy to see as a subpoena. 
Just as the Cratchits' Christmas was spent with a man who, for all they knew, had suffered some kind of head trauma, so the modern holiday gathering includes relatives or acquaintances whom, because they watch too many talk shows, and/or have poor personal hygiene, and/or fail to maintain scheduled medication, you would normally avoid like a plate of frosted botulism. But in the season of good will towards men, you smile warmly at the mystery uncle wandering around half-crocked with a clump of mistletoe dangling from the bill of his N.R.A. cap. Dickens' story wouldn't have become the holiday classic it has if, having spotted on their doorstep an insanely grinning, raw-poultry-bearing, fresh-off-a-rough-night Scrooge, the Cratchits had pulled their shades and pretended not to be home. Which is probably what I would have done. Instead, knowing full well his reputation as a career grouch, they welcomed him into their home, and we have a touching story that teaches a valuable lesson about how the Christmas spirit can get the boss to pump up the payroll. Despite what the critics might say, our modern Christmas isn't all that different from those of long ago. Sure, the technology has changed, but that just means a bigger, brighter, louder Christmas, with lasers and holograms and stuff. It's our modern celebration of a season that even the least spiritual among us recognizes as a time of hope that the nutcases of the world will wake up and realize that peace on earth is a win/win proposition for everybody. If Christmas has changed, it's for the better. We should continue making Christmas bigger and louder and shinier until everybody gets it.  *** Happy Holidays, everyone!   del.icio.us Tags: holiday,humor Technorati Tags: holiday,humor

    Read the article

  • A Rose by Any Other Name..

    - by Geoff N. Hiten
It is always a good start when you can steal a title line from one of the best writers in the English language.  Let's hope I can make the rest of this post live up to the opening.  One recurring problem with SQL Server is moving databases to new servers.  Client applications use a variety of ways to resolve SQL Server names, some of which are not changed easily <cough SharePoint /cough>.  If you happen to be using default instances on both the source and target SQL Server, then the solution is pretty simple.  You create (or bug the network admin until she creates) two DNS "A" records. One points the old name to the new IP address.  The other creates a new alias for the old server, since the original system name is now redirected.  Note this will redirect ALL traffic from the old server to the new server, including RDP and file share connection attempts.    Figure 1 – Microsoft DNS MMC Snap-In   Figure 2 – DNS New Host Dialog Box Both records are necessary so you can still access the old server via an alternate name.

Server Role | IP Address   | Name  | Alias
Source      | 10.97.230.60 | SQL01 | SQL01_Old
Target      | 10.97.230.80 | SQL02 | SQL01
Table 1 – Alias List

If you or somebody set up connections via IP address, you deserve to have to go to each app and fix it by hand.  That is the only way to fix that particular foul-up. If you have to deal with Named Instances as either a source or a target, then it gets more complicated.  The standard fix is to use the SQL Server Configuration Manager (or one of its earlier incarnations) to create a SQL client alias to redirect the connection.  This can be a pain to install and configure on multiple client servers.  The good news is that SQL Server Configuration Manager AND all of its earlier versions simply write a few registry keys.  Extracting the keys into a .reg file makes centralized automated deployment a snap. If the client is a 32-bit system, you have to extract the native key.  If it is 64-bit, you have to extract the native key and the WoW (32-bit on 64-bit host) key. First, pick a development system to create the actual registry key.  If you do this repeatedly, you can simply edit an existing registry file.  Create the entry using the SQL Server Configuration Manager.  You must use a 64-bit system to create the WoW key.  The following example redirects from a named instance "SQL01\SQLUtiluty" to a default instance on "SQL02".   Figure 3 – SQL Server Configuration Manager - Native Figure 3 shows the native key listing. Figure 4 – SQL Server Configuration Manager – WoW If you think you don't need the WoW key because your app is 64-bit, think again.  SQL Server Management Studio is a 32-bit app, as are most SQL test utilities.  Always create both keys for 64-bit target systems. Now that the keys exist, we can extract them into a .reg file. Fire up REGEDIT and browse to the following location:  HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo.  You can also search the registry for the string value of one of the server names (old or new). Right-click on the "ConnectTo" label and choose "Export".  Save with an appropriate name and location.  The resulting file should look something like this: Figure 5 – SQL01_Alias.reg Repeat the process with the location: HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo Note that if you have multiple alias entries, ALL of the entries will be exported.  In that case, you can edit the file and remove the extra aliases. You can edit the files together into a single file.  
Just leave a blank line between new keys like this: Figure 6 – SQL01_Alias_All.reg Of course if you have an automatic way to deploy, it makes sense to have an automatic way to Un-deploy.  To delete a registry key, simply edit the .reg file and replace the target with a “-“ sign like so. Figure 7 – SQL01_Alias_UNDO.reg Now we have the ability to move any database to any server without having to install or change any applications on any client server.  The whole process should be transparent to the applications, which makes planning and coordinating database moves a far simpler task.
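
Since the aliases are just registry values, you can also skip the .reg files entirely and script the deployment. Below is a minimal C# sketch that writes the same ConnectTo values the exported files contain; "DBMSSOCN" is the TCP/IP network library prefix the SQL client uses, and the alias and target names are the ones from the example above. One assumption to note: the WoW write only lands in Wow6432Node when run from a 64-bit process (a 32-bit process gets redirected there automatically).

using Microsoft.Win32;

class AliasDeployer
{
    static void Main()
    {
        // Native key - read by 64-bit clients on a 64-bit host (or by everything on 32-bit)
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo"))
        {
            key.SetValue(@"SQL01\SQLUtiluty", "DBMSSOCN,SQL02");
        }

        // WoW key - read by 32-bit clients (SSMS included) on a 64-bit host
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo"))
        {
            key.SetValue(@"SQL01\SQLUtiluty", "DBMSSOCN,SQL02");
        }
    }
}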

    Read the article

  • Connection Pooling is Busted

    - by MightyZot
A few weeks ago we started getting complaints about performance in an application that has performed very well for many years.  The application is an n-tier application that uses ADODB with the SQLOLEDB provider to talk to a SQL Server database.  Our object model is written in such a way that each public method validates security before performing requested actions, so there is a significant number of queries executed to get information about file cabinets, retrieve images, create workflows, etc.  (PaperWise is a document management and workflow system.)  A common factor for these customers is that they have remote offices connected via MPLS networks. Naturally, the first thing we looked at was the query performance in SQL Profiler.  All of the queries were executing within expected timeframes; most of them were so fast that the duration in SQL Profiler was zero.  After getting nowhere with SQL Profiler, the situation was escalated to me.  I decided to take a peek with Process Monitor.  Procmon revealed some "gaps" in the TCP/IP traffic.  There were notable delays between send and receive pairs.  The send and receive pairs themselves were quite snappy, but quite often there was a notable delay between a receive and the next send.  You might expect some delay because, presumably, the application is doing some thinking in between the pairs.  But comparing the procmon data at the remote locations with the procmon data for workstations on the local network showed that the remote workstations were significantly delayed.  Procmon also showed a high number of disconnects. Wireshark traces showed that connections to the database were taking between 75ms and 150ms.  Not only that, but connections to a file share containing images were taking 2 seconds!  So, I asked about a trust.  Sure enough, there was a trust between two domains and the file share was on the second domain.  Joining a remote workstation to the domain hosting the share containing images alleviated the time delay in accessing the file share.  Removing the trust had no effect on the connections to the database. Microsoft Network Monitor includes filters that parse TDS packets.  TDS is the protocol that SQL Server uses to communicate.  There is a certificate exchange and some SSL that occurs during authentication.  All of this was evident in the network traffic.  After staring at the network traffic for a while, and examining packets, I decided to call it a night.  On the way home that night, something about the traffic kept nagging at me.  Then it dawned on me…at the beginning of the dance of packets between the client and the server all was well.  Connection pooling was working and I could see multiple queries getting executed on the same connection and ephemeral port.  After a particular query, connecting to two different servers, I noticed that ADODB and SQLOLEDB started making repeated connections to the database on different ephemeral ports.  SQL Server would execute a single query and respond on a port, then open a new port and execute the next query.  Connection pooling appeared to be broken. The next morning I wrote a test to confirm my hypothesis.  It turns out that the sequence causing the connection nastiness goes something like this:

1. Make a connection to the database.
2. Open a result set that returns enough records to require multiple roundtrips to the server.
3. For each result, query for some other data in the database (this will open a new implicit connection).
4. Close the inner result set and repeat for every item in the original result set.
5. Close the original connection.

Provided that the first result set returns enough data to require multiple roundtrips to the server, ADODB and SQLOLEDB will start making new connections to the database for each query executed in the loop.  Originally, I thought this might be due to Microsoft's denial-of-service (DoS) attack protection.  After turning those features off to no avail, I eventually thought to switch my queries to client-side cursors instead of server-side cursors.  Server-side cursors are the default, by the way.  Voila!  After switching to client-side cursors, the disconnects were gone and the above sequence yielded two connections as expected. While the real problem is the amount of time it takes to make connections over these MPLS networks (100ms on average), switching to client-side cursors made the problem go away.  Believe it or not, this is actually documented by Microsoft, and rather difficult to find.  (At least it was while we were trying to troubleshoot the problem!)  So, if you're noticing performance issues on slower networks, or networks with slower switching, take a look at the traffic in a tool like Microsoft Network Monitor.  If you notice a high number of disconnects, and you're using firehose or server-side cursors, then try switching to client-side cursors and you may see the problem go away. Most likely, Microsoft believes this to be appropriate behavior, because ADODB can't guarantee that all of the data has been retrieved when you execute the inner queries.  I'm not convinced, though, because the problem remains even after replacing all of the implicit connections with explicit connections and closing those connections in between each of the inner queries.  In that case, there doesn't seem to be a reason why ADODB can't use a single connection from the connection pool to make the additional queries, bringing the total number of connections to two.  Instead, ADO appears to make an assumption about the state of the connection. I've reported the behavior to Microsoft and am waiting to hear from the appropriate team, so that I can demonstrate the problem.  Maybe they can explain to us why this is appropriate behavior.  :)
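
For anyone who wants to reproduce the pattern, here is a minimal C# sketch of the outer/inner query loop via the ADODB COM interop assembly (add a COM reference to "Microsoft ActiveX Data Objects"). The connection string, table, and column names are placeholders, not the actual PaperWise schema, and the CursorLocation lines show the client-side-cursor fix described above:

using ADODB; // COM interop reference

class CursorRepro
{
    static void Main()
    {
        var cn = new Connection();
        cn.Open("Provider=SQLOLEDB;Data Source=SQL01;Initial Catalog=MyDb;Integrated Security=SSPI", "", "", 0);

        var outer = new Recordset();
        // The fix: adUseClient. With the default (adUseServer), the inner
        // queries below triggered a fresh connection and ephemeral port each time.
        outer.CursorLocation = CursorLocationEnum.adUseClient;
        outer.Open("SELECT CabinetId FROM Cabinets", cn,
                   CursorTypeEnum.adOpenStatic, LockTypeEnum.adLockReadOnly,
                   (int)CommandTypeEnum.adCmdText);

        while (!outer.EOF)
        {
            // Inner lookup per row (an implicit second connection in ADO terms)
            var inner = new Recordset();
            inner.CursorLocation = CursorLocationEnum.adUseClient;
            inner.Open("SELECT COUNT(*) FROM Images WHERE CabinetId = " + outer.Fields[0].Value,
                       cn, CursorTypeEnum.adOpenForwardOnly, LockTypeEnum.adLockReadOnly,
                       (int)CommandTypeEnum.adCmdText);
            inner.Close();
            outer.MoveNext();
        }

        outer.Close();
        cn.Close();
    }
}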

    Read the article

  • Configuring Expert Search in Communicator 14 and SharePoint 2010

Communicator 14 provides functionality to search for contacts not just by name, but by skill.  For example, a customer service agent at an airline can search for other agents with Travel Advisory experience by typing the search criteria into the Communicator search box and performing a search by keyword.  The search results will return users who have specified that skill in their profile on their SharePoint My Site.  This is actually pretty easy to configure; I'll show you how.

Create Search and People Search Results Pages in SharePoint

Communicator 14 Expert Search works by using the SharePoint 2010 Search Service to search SharePoint for user profiles with matching keywords.  This requires that you have an Enterprise Search site in your site collection, which includes the search service and also the People Results pages.  The easiest way to do this is to create a Search Center site in your site collection. Note: I get an error when trying to create an Enterprise Search site in a Team Site in the SharePoint 2010 RTM bits, so I created it as a site collection, as is evident in the URLs you see below. In the screenshots below, you can see that the URL of the SharePoint search service in the Search site collection is http://sps2010/sites/search/_vti_bin/search.asmx, and the URL of the People Search Results page is http://sps2010/sites/Search/Pages/peopleresults.aspx.

Point Communications Server 14 to the Search and People Search Results Pages

For Communicator 14 to be able to perform an Expert Search, you need to configure Communications Server 14 to point to the Search Service and People Search Results page URLs. From a server with the OCS Core bits installed, fire up the Communications Server Management Shell and type Get-CsClientPolicy. Scroll down to the bottom of the output; we're interested in setting the values of:

SPSearchInternalURL
SPSearchExternalURL
SPSearchCenterInternalURL
SPSearchCenterExternalURL

SPSearchInternalURL and SPSearchExternalURL correspond to the internal and external URLs of the SharePoint search service in the Search site collection, while SPSearchCenterInternalURL and SPSearchCenterExternalURL correspond to the internal and external URLs of the people search results pages. We'll use the Communications Server Management Shell to set the values of these CS policy properties. For simplicity, I'm only going to set the internal URLs here:

Set-CsClientPolicy -SPSearchInternalURL http://sps2010/sites/search/_vti_bin/search.asmx -SPSearchCenterInternalURL http://sps2010/sites/Search/Pages/peopleresults.aspx

Log out of and back into Communicator.  You can verify that these settings were applied by running the Get-CsClientPolicy cmdlet again from the Communications Server Management Shell. However, there's another super-secret ninja trick to verify that the settings were applied: Find the Communicator icon in the Windows System Tray. Hold down the Ctrl button. Click (left) the Communicator icon in the Windows System Tray without releasing the Ctrl button. You should now see an extra menu item called Configuration Information; click it. Scroll down and locate the Expert Search URL and SharePoint Search Center URL keys and verify that their values correspond to those you set using the Set-CsClientPolicy PowerShell cmdlet.

Configure a SharePoint User Profile Import

I'm not going to provide detailed steps here except to say that you need to configure the SharePoint 2010 User Profile Service Application to import user account details from Active Directory on a scheduled basis. 
This is a critical step because there are several user profile properties, e.g. SipAddress, that are only populated by a user profile import.  When performing an Expert Search, Communicator can only render results for users who have a SipAddress specified.

Add Skills to User Profiles

Navigate to your My Site and click on My Profile.  This page allows you to set many contact details that are searchable in SharePoint.  We're particularly interested in the Ask Me About property of a user's profile.  Expert Search searches against this property to find users with matching skills.

Configure a SharePoint Search Crawl

Ensure that you have a scheduled job to crawl your Local SharePoint Sites content source.  Depending on how you have this configured, it will also crawl the My Site site collection and add user properties such as Ask Me About to the search index.

That's It!

SharePoint 2010 provides new social and collaboration features to help users find other users with similar skills or interests. Expert Search extends this functionality directly into Microsoft Communicator 14, allowing you to interact with the users directly from the search results.

    Read the article

  • Guest Post: Instantiate SharePoint Workflow On Item Deleted

    - by Brian Jackett
In this post, guest author Lucas Eduardo Silva will walk you through the steps of instantiating a workflow using an item event receiver from a custom list.  The ItemDeleting event will require approval via the workflow.

Foreword

As you may have read recently, I injured my right hand and have had it in a cast for the past 3 weeks.  Due to this I planned to reduce my blogging while my hand heals.  As luck would have it, I was actually approached by someone who asked if they could be a guest author on my blog.  I've never had a guest author, but considering my injury, now seemed like as good a time as ever to try it out.

About the Guest Author

Lucas Eduardo Silva (email) works for CPM Braxis, a sibling company to my employer Sogeti in the CapGemini family.  Lucas and I exchanged emails a few times after one of my recent posts and continued into various topics.  When I posted that I had injured my hand, Lucas mentioned that he had a post idea that he would like to publish and asked if it could be published on my blog.  The content below is the result of that collaboration.

The Problem

Lucas has a big problem.  He has a workflow that he wants to fire every time an item is deleted from a custom list. He has already created the association in the "item deleting event", but he needs to approve the deletion, and the workflow is finishing first. Lucas put in an onWorkflowItemChanged activity to wait for the approval status to change, but it is not being hit.

The Solution

Note: This solution assumes you have the Visual Studio Extensions for Windows SharePoint Services (VSeWSS) installed to access the SharePoint project templates within Visual Studio.

1 - Create a workflow that will be activated by the ItemEventReceiver.
2 - Create the list in Visual Studio by clicking File -> New -> Project. Select SharePoint, then List Definition.
3 - Select the type of document to be created: List, Document Library, Wiki, Tasks, etc.
4 - Visual Studio creates the file ItemEventReceiver.cs with all possible events in a list.
5 - In the workflow project, open workflow.xml and copy the ID.
6 - Uncomment the ItemDeleting event and insert the following code, replacing the ID with the one you copied earlier.

// Cancel the deletion
properties.Cancel = true;

// Activate the deletion-approval workflow
SPWorkflowManager workflowManager = properties.ListItem.Web.Site.WorkflowManager;

SPWorkflowAssociation wfAssociation = properties.ListItem.ParentList.WorkflowAssociations.GetAssociationByBaseID(new Guid("37b5aea8-792a-4ded-be25-d283d9fe1f9d"));

workflowManager.StartWorkflow(properties.ListItem, wfAssociation, wfAssociation.AssociationData, true);

properties.Status = SPEventReceiverStatus.CancelNoError;

7 - properties.Cancel cancels the event being activated and executes the code that is inside the event. In the example, it cancels the deletion of the item in order to start the workflow that is associated with the list via the workflow ID.
8 - Build and deploy the workflow and the list to SharePoint.
9 - Create a list from the list definition that was created.
10 - Enable the workflow on the list and congratulations! Every time you try to delete an item the workflow is activated.

TIP: If you really want to delete the item after the workflow is done, you will have to delete the item from the workflow.   
this.workflowProperties.Site.AllowUnsafeUpdates = true; this.workflowProperties.Item.Delete(); this.workflowProperties.List.Update();   Conclusion     In this guest post Lucas took you through the steps of creating an item deletion approval workflow with an event receiver.  This was also the first time I’ve had a guest author on this blog.  Many thanks to Lucas for putting together this content and offering it.  I haven’t decided how I’d handle future guest authors, mostly because I don’t know if there are others who would want to submit content.  If you do have something that you would like to guest author on my blog feel free to drop me a line and we can discuss.  As a disclaimer, there are no guarantees that it will be published though.  For now enjoy Lucas’ post and look for my return to regular blogging soon.         -Frog Out   <Update 1> If you wish to contact Lucas you can reach him at [email protected] </Update 1>
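
As a reference for step 6 above, the snippet lives inside the ItemDeleting override of the generated event receiver class. Here is a minimal sketch of that wrapper (the class name is hypothetical; the body is the same code from step 6):

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

public class ApprovalItemEventReceiver : SPItemEventReceiver
{
    public override void ItemDeleting(SPItemEventProperties properties)
    {
        // Cancel the deletion...
        properties.Cancel = true;

        // ...and start the approval workflow instead (same GUID as in workflow.xml)
        SPWorkflowManager workflowManager =
            properties.ListItem.Web.Site.WorkflowManager;

        SPWorkflowAssociation wfAssociation =
            properties.ListItem.ParentList.WorkflowAssociations
                .GetAssociationByBaseID(new Guid("37b5aea8-792a-4ded-be25-d283d9fe1f9d"));

        workflowManager.StartWorkflow(properties.ListItem, wfAssociation,
            wfAssociation.AssociationData, true);

        properties.Status = SPEventReceiverStatus.CancelNoError;
    }
}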

    Read the article

  • User is trying to leave! Set at-least confirm alert on browser(tab) close event!!

    - by kaushalparik27
This is something that might be annoying or irritating for the end user. Obviously, it's impossible to prevent the end user from closing the/any browser. Just imagine if it ever became possible! That would be a horrible web world, where you would be attacked by sites that will not allow you to close your browser until you confirm your shopping cart and make the payment. LOL :) You would need to open the task manager and might have to kill the running browser exe processes. Anyways, jokes apart, I have one situation where I need to alert/confirm with the user in some way when they try to close the browser or change the URL. Think of this: You are creating a single-page intranet ASP.NET application where your employees can enter/select their TDS/investment declarations, and you wish to at least ALERT/CONFIRM them if they are attempting to: [1] Close the browser [2] Close the browser tab [3] Go to some other site by changing the URL, without completing/freezing their declaration. So, finally, the requirement is clear: I need to alert/confirm with the user on the above events. I am going to use the window.onbeforeunload event to make a JavaScript confirm alert box appear.    <script language="JavaScript" type="text/javascript">        window.onbeforeunload = confirmExit;        function confirmExit() {            return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";        }    </script>    See! You are halfway done! So, every time the browser unloads the page, the above confirm alert appears in front of the user like below: By saying "every time the browser unloads the page", I mean that whenever the page loads or a postback happens, the browser onbeforeunload event will be executed. So even a button or link submit that causes the page to post back will fire the browser onbeforeunload event! So, now the hurdle is: how can we prevent the alert from showing when the page is posted back via any button/link submit? The answer is jQuery :) The idea is, you just need to set the script reference src to the jQuery library and set the window.onbeforeunload event to null when any input/link causes the page to post back. Below is the complete code: <head runat="server">    <title></title>    <script src="jquery.min.js" type="text/javascript"></script>    <script language="JavaScript" type="text/javascript">        window.onbeforeunload = confirmExit;        function confirmExit() {            return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";        }        $(function() {            $("a").click(function() {                window.onbeforeunload = null;            });            $("input").click(function() {                window.onbeforeunload = null;            });        });    </script></head><body>    <form id="form1" runat="server">    <div></div>    </form></body></html> So, with this post I have tried to set up a confirm alert for when a user tries to close the browser/tab or leave the site by changing the URL. I have attached a working example with this post here. I hope someone might find it helpful.

    Read the article

  • Playing with aspx page cycle using JustMock

    - by mehfuzh
In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and assert the expected page cycle steps. To begin, I have a simple enumeration that holds my predefined page steps: public enum PageStep {     PreInit,     Load,     PreRender,     UnLoad } Once that's done, I first create the page object [not mocking]. Page page = new Page(); Here, our target is to fire up the page process through the ProcessRequest call. Now, if we take a look inside the method with Reflector.NET, the call trace goes like: ProcessRequest –> ProcessRequestWithNoAssert –> SetIntrinsics –> finally ProcessRequest. Inside SetIntrinsics, it requires calls from HttpRequest, HttpResponse and HttpBrowserCapabilities. Using this clue, we can easily know the classes / calls we need to mock in order to get through the expected call. Accordingly, for HttpBrowserCapabilities our required mock code will look like: var browser = Mock.Create<HttpBrowserCapabilities>(); // Arrange Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html"); Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8"); Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8"); Now, HttpBrowserCapabilities is reached through [Instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock: var request = Mock.Create<HttpRequest>(); Then, add the required get call: Mock.Arrange(() => request.Browser).Returns(browser); As [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also set on the request object, to make sure they are set properly we can add the following lines as well [not required though]: bool requestContentEncodingSet = false; Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() =>  requestContentEncodingSet = true); Similarly, for the response we can write:  var response = Mock.Create<HttpResponse>();    bool responseContentEncodingSet = false;  Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => responseContentEncodingSet = true); Finally, I create a mock of HttpContext and set the Request and Response properties so that they return the mocked versions. 
var context = Mock.Create<HttpContext>();   Mock.Arrange(() => context.Request).Returns(request); Mock.Arrange(() => context.Response).Returns(response); As Page internally calls the RenderControl method, we just need to replace that with our own and, optionally, we can check whether it was invoked properly: bool rendered = false; Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>())).DoInstead(() => rendered = true); That's it. The rest of the code is simple, where I assert the page cycle with the PageSteps that I defined earlier: var pageSteps = new Queue<PageStep>();   page.PreInit += delegate { pageSteps.Enqueue(PageStep.PreInit); }; page.Load += delegate { pageSteps.Enqueue(PageStep.Load); }; page.PreRender += delegate { pageSteps.Enqueue(PageStep.PreRender); }; page.Unload += delegate { pageSteps.Enqueue(PageStep.UnLoad); };   page.ProcessRequest(context);   Assert.True(requestContentEncodingSet); Assert.True(responseContentEncodingSet); Assert.True(rendered);   Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit); Assert.Equal(pageSteps.Dequeue(), PageStep.Load); Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender); Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad);   Mock.Assert(request); Mock.Assert(response); You can get the test class shown in this post here to try it out yourself with, of course, JustMock :-). Enjoy!!

    Read the article

  • Monitoring almost anything with BizTalk 360

    - by Michael Stephenson
When you work in an integration environment it is common that you will find yourself in a situation where you integrate with some unusual applications or have some unusual dependencies. That is the nature of integration. When you work with BizTalk, one of the common problems is that BizTalk is often the place where problems with the applications you integrate with are highlighted, and these external applications may have poor monitoring solutions. Fortunately, if you are working with a customer who uses BizTalk 360, it contains a feature called the "Web Endpoint Manager". Typically the Web Endpoint Manager is used to monitor web services that you integrate with; it will ping them at appropriate intervals to make sure they return the expected HTTP status code. When you have an unusual situation where you want to monitor something which is key to the success of your solution, but you find yourself considering a significant custom solution to monitor the external dependency, then the Web Endpoint Manager could be your friend. The endpoint manager monitors a URL and checks for a certain status code. This means that you can create your own .aspx web page and then make BizTalk 360 monitor that page. Behind the web page you could write any code you wish. An example of this architecture is shown in the diagram below.     In the custom web page you would implement some custom code to do whatever it is that you want to monitor. In the code snippet below you can see how the Page_Load default method does some kind of check, then depending on the result of the check it returns a certain HTTP status code. protected void Page_Load(object sender, EventArgs e) { // CheckSomething() is the site-specific probe for the external dependency var result = CheckSomething();   if (result == "Success") Response.StatusCode = 202; else if (result == "DatabaseError") Response.StatusCode = 510; else if (result == "SystemError") Response.StatusCode = 512; else Response.StatusCode = 513;   }   In BizTalk 360 you would then go into the Monitor and Notify tab and then to BizTalk Environment, which gives you access to the Web Endpoint Manager. You need an alarm set up which configures how the endpoint will be checked. I'm not going to go through the details of creating the alarm, as this is already documented in the BizTalk 360 documentation. One point to note is that in the example I am using, I set up a threshold alarm, which means that the URL is checked about every minute, and if an error persists for a period of time then the alarm will raise the alert notification. In my example I configured the alarm to fire if the error persisted for 3 minutes. The picture below shows accessing the endpoint manager.   In the Web Endpoint Manager you would then configure your endpoint to monitor and the HTTP response code which indicates all is working fine. The picture below shows this. I now have my endpoint monitoring set up, and BizTalk 360 should be checking my custom endpoint to see that it is available. If I want to manually sanity-check that the endpoints I have registered are working fine, clicking the Refresh button will show whether they are all good or not. If the custom ASP.NET page which is checking my dependency hits a problem, you will see in the endpoint manager that the status code does not match the expected return code; your endpoints will display in red and you can see the problem. The picture below shows this. If I use specific HTTP response codes for the errors the custom ASP.NET page might encounter, I can easily interpret these to know what the problem is. 
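To make the snippet above compilable, CheckSomething() needs a body. Here is one purely illustrative shape it could take; the probe logic is an assumption, not something from the original solution:

// Hypothetical probe - replace the body with whatever check your dependency needs
private string CheckSomething()
{
    try
    {
        // e.g. open a database connection, ping a queue, or call the
        // downstream system's health URL here
        return "Success";
    }
    catch (System.Data.SqlClient.SqlException)
    {
        return "DatabaseError";
    }
    catch (System.Exception)
    {
        return "SystemError";
    }
}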
Using the alarms and notifications in BizTalk 360, when your endpoint goes into an error state you can easily configure email or SMS notifications from BizTalk 360 to tell you that your endpoint is having problems, and you can use BizTalk 360 to help correlate what the problem is to allow you to investigate further. Below you can see the email which tells me my endpoint is not working.   When everything returns to normal you will see the status is fixed, and you will see a situation like below where the WebEndpoints are now green and the return code matches what is expected.   Conclusion As you can see, it is really easy to plug your own custom ASP.NET page into the BizTalk 360 web endpoint monitoring feature. This extension gives you the power to extend the monitoring to almost anything you want, as long as you can write some .NET code to check that the dependency is available and working. It would be interesting to hear of any ideas people have around things they would monitor with this extension. More details on the endpoint monitor can be found at the following link: http://www.biztalk360.com/tour/monitoring_notifications

    Read the article

  • How does one get rid of fishy behavior in Windows?

    - by Tom Wijsman
After I had booted my computer this morning, water suddenly flooded from the top of the screen, after which some fishes dropped into it. Now I can barely see what I am doing because the water distorts the view. Sometimes the fish follow the cursor, so I need to move it away or wait for the fish to mind their own business. This makes it very annoying to use my system.

What have I tried?

Reboot the system. This caused the water to deplete from the desktop. Upon reboot, the screen was refilled with water and fishes.
Attach another monitor. Same problem; it fills that monitor as well and gives me extra fish.
Clicking the fish. Makes them turn direction.
Right clicking the fish. Changes the color of the fish, not really useful.
I'm locked out of changing the background or screen saver settings. Hence, I had to post the lady below...
Safe mode doesn't save me from the fishes. It does give me another background there, but I can't screenshot easily.
Other user accounts experience this as well. The Guest account seems to experience more fish than the other accounts.
Using HijackThis, OTL Timekeeper List, Sysinternals Autoruns, RootKitRevealer, ShellExView and similar tools I can't seem to find any entries that could be it; the Sysinternals tools show everything as verified. I'm suspecting this to be a driver problem. Randomly removing drivers doesn't seem to alleviate the problem. When removing the Graphics Drivers, it makes my screen black. While that could be considered a solution, it's not what I want.
Changing the time / date settings also does not seem to affect the fishes. Changing the time a few years into the future, I would have expected the fishes to be dead. But the same fishes are still there... They simply won't die!
Tried to get used to them. They are really bothering me a lot; it looks like they require food. I don't know how to give them food, but apparently they get it elsewhere during reboot...
Tried to disable my mouse pointer and use the keyboard. This works; they now swim around more randomly. They do put their attention to huge changes on the screen, so I need to type slowly, or otherwise I can't see what I'm typing exactly.
Hold my laptop upside down. This seems to affect the water and fishes, but the water stays in the screen. They seem super resistant against water sickness and confusion though...

What does the problem look like? What do I need? A way to get rid of these fishes on my screen forever; they are really annoying me a lot and I'm about to crack the screen to see if that makes them escape. Do you have any idea why this problem is occurring?

What are my considerations? Buying a USB fish tank could make the fish leave the screen; I am uncertain, though, whether the fish could leave the screen through the USB cable. Using the FISh (programming language), which seems to provide EXPRESSIVE POWER and EFFICIENT EXECUTION, I can, however, not find any examples on how to remove fish.

What are my Specifications? I'm using a Sony Vaio Fishy laptop. Sony VAIO VGN-Fishy, VAIO. Processor: 1337 MHz, Intel Core 2 Duo, T5432, 1 MB, Intel PM965 Express, 667 MHz. Memory: 1024 MB, DDR2-SDRAM, 667 MHz, 2 x 1024 MB, 4 GB. Disk Drive: 50 GB, Serial ATA, 5400 RPM. Storage Media: Memory Stick™, Memory Stick PRO™. Display: 15.4 ", 1280 x 800 pixels, LCD. Video: GeForce 8400M GT, 128 MB. Optical Drive: DVD±R/RW DL, 24 x, 24 x, 24 x, 6 x, 4 x, 6 x, 4 x, 5 x, 5 x, 8 x, 8 x, 8 x, 8 x, 6 x, 6 x, 24 x, 24 x, 24 x, 16 x. Camera: 1.3 MP, 30 fps. Networking: 2.0+EDR. Keyboard: Touchpad, AZERTY. 
Operating System/Software: Windows Vista Home Premium. Security: Kensington. Weight & Dimensions: 98.8 oz (2800 g), 14 " (355.8 mm), 10 " (254.4 mm), 0.98 " (24.9 mm). Other features: 100 BASE-TX/10 BASE-T, 802.11a/b/g/n/Draft n, V92/V.90, fishes. Plz! Help me...

    Read the article

  • Windows Workflow Foundation (WF) and things I wish were more intuitive

    - by pjohnson
I've started using Windows Workflow Foundation, and so far I've run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow. Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component" style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and went without code separation for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good. Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different. Designer Properties - This is about the designer, and not specific to WF, but nevertheless something that's hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over the values. But it doesn't do the same when you're trying to find out what a control's type is. So maybe if I named all my activities "x1" and "x2" instead of helpful self-documenting names like "listenForStatusUpdate", then I could easily see enough of the type to determine what it is, but with any names longer than those, all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, like the debugger quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when you're done to see what you were doing. Really? WF Designer - This is about the designer, and I believe is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click on the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one way where WF didn't get the same attention WPF got during .NET Fx 3.0 development. Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.). ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances): one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity. 
The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than should be necessary--why not have activities just wire to a service directly?). And you need to create a custom EventArgs class that inherits from ExternalDataEventArgs--you can't create an ExternalDataEventArgs event handler directly, even if you don't need to add any more information to the event args, despite ExternalDataEventArgs not being marked as an abstract class, nor a compiler error nor warning nor any other indication that you're doing something wrong, until you run it and find that it always times out and get to check every place mentioned here to see why. Your interface and service need an event that consumes your custom EventArgs class, and a method to fire that event. You need to call that method from somewhere. Then you get to hope that you did everything just right, or that you can step through code in the debugger before your Delay timeout expires. Yes, it's as much fun as it sounds. TransactionScopeActivity - I had the bright idea of putting one in as a placeholder, then filling in the database updates later. That caused this error: The workflow hosting environment does not have a persistence service as required by an operation on the workflow instance "[GUID]". ...which is about as helpful as "Object reference not set to an instance of an object" and even more fun to debug. Google led me to this Microsoft Forums hit, and from there I figured out it didn't like that the activity had no children. Again, a Validator on TransactionScopeActivity would have pointed this out to me at design time, rather than handing me a nearly useless error at runtime. Easily enough, I disabled the activity and that fixed it. I still see huge potential in my work where WF could make things easier and more flexible, but there are some seriously rough edges at the moment. Maybe I'm just spoiled by how much easier and more intuitive development elsewhere in the .NET Framework is.
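
For anyone wiring up the ListenActivity dance described above, here is a minimal sketch of the service plumbing. The names (IStatusUpdateService, StatusUpdateEventArgs, and so on) are hypothetical, but the pattern itself is what WF requires: an [ExternalDataExchange] interface, an EventArgs class derived from ExternalDataEventArgs, and a method that fires the event. Set the HandleExternalEventActivity's InterfaceType to the interface and its EventName to "StatusUpdated", and register the service instance with an ExternalDataExchangeService on the WorkflowRuntime before starting the workflow.

using System;
using System.Workflow.Activities;

// The custom EventArgs WF insists on, even when you add nothing new to it
[Serializable]
public class StatusUpdateEventArgs : ExternalDataEventArgs
{
    public StatusUpdateEventArgs(Guid instanceId, string status)
        : base(instanceId)
    {
        Status = status;
    }

    public string Status { get; private set; }
}

// The service interface the HandleExternalEventActivity binds to
[ExternalDataExchange]
public interface IStatusUpdateService
{
    event EventHandler<StatusUpdateEventArgs> StatusUpdated;
    void RaiseStatusUpdated(Guid instanceId, string status);
}

// The "service": just a class that raises the event into the workflow
public class StatusUpdateService : IStatusUpdateService
{
    public event EventHandler<StatusUpdateEventArgs> StatusUpdated;

    public void RaiseStatusUpdated(Guid instanceId, string status)
    {
        EventHandler<StatusUpdateEventArgs> handler = StatusUpdated;
        if (handler != null)
            handler(null, new StatusUpdateEventArgs(instanceId, status));
    }
}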


  • Android - Switching Activities with a Tab Layout

    - by Bill Osuch
This post is based on the Tab Layout tutorial on the Android developers site, with some modifications. I wanted to get rid of the icons (they take up too much screen real estate) and modify the fonts on the tabs.

First, create a new Android project with an Activity called TabWidget. Then create two additional Activities called TabOne and TabTwo. Throw a simple TextView on each one with a message identifying the tab, like this:

public class TabTwo extends Activity {
  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    TextView tv = new TextView(this);
    tv.setText("This is tab 2");
    setContentView(tv);
  }
}

And don't forget to add them to your AndroidManifest.xml file:

<activity android:name=".TabOne"></activity>
<activity android:name=".TabTwo"></activity>

Now we'll create the tab layout. Open the res/layout/main.xml file and insert the following:

<?xml version="1.0" encoding="utf-8"?>
<TabHost xmlns:android="http://schemas.android.com/apk/res/android"
  android:id="@android:id/tabhost"
  android:layout_width="fill_parent"
  android:layout_height="fill_parent">
  <LinearLayout
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <TabWidget
      android:id="@android:id/tabs"
      android:layout_width="fill_parent"
      android:layout_height="wrap_content" />
    <FrameLayout
      android:id="@android:id/tabcontent"
      android:layout_width="fill_parent"
      android:layout_height="fill_parent" />
  </LinearLayout>
</TabHost>

Finally, we'll create the code needed to populate the TabHost. Make sure your TabWidget class extends TabActivity rather than Activity, then add code to grab the TabHost and create an Intent to launch a new Activity:

TabHost tabHost = getTabHost();  // The activity TabHost
TabHost.TabSpec spec;            // Reusable TabSpec for each tab
Intent intent;                   // Reusable Intent for each tab

// Create an Intent to launch an Activity for the tab (to be reused)
intent = new Intent().setClass(this, TabOne.class);

Add the first tab to the layout:

// Initialize a TabSpec for each tab and add it to the TabHost
spec = tabHost.newTabSpec("tabOne");
spec.setContent(intent);
spec.setIndicator("Tab One");
tabHost.addTab(spec);

It's pretty tall as-is, so we'll shorten it:

// Squish the tab a little bit vertically
tabHost.getTabWidget().getChildAt(0).getLayoutParams().height = 40;

But the text is a little small, so let's increase the font size. (Note that because our own Activity is named TabWidget, the framework's tab widget class has to be referenced by its fully qualified name, android.widget.TabWidget.)

// Bump the text size up
LinearLayout ll = (LinearLayout) tabHost.getChildAt(0);
android.widget.TabWidget tw = (android.widget.TabWidget) ll.getChildAt(0);
RelativeLayout rllf = (RelativeLayout) tw.getChildAt(0);
TextView lf = (TextView) rllf.getChildAt(1);
lf.setTextSize(20);

Do the same for the second tab, and you wind up with this:

@Override
public void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.main);

  TabHost tabHost = getTabHost();  // The activity TabHost
  TabHost.TabSpec spec;            // Reusable TabSpec for each tab
  Intent intent;                   // Reusable Intent for each tab

  // Create an Intent to launch an Activity for the tab (to be reused)
  intent = new Intent().setClass(this, TabOne.class);

  // Initialize a TabSpec for each tab and add it to the TabHost
  spec = tabHost.newTabSpec("tabOne");
  spec.setContent(intent);
  spec.setIndicator("Tab One");
  tabHost.addTab(spec);

  // Squish the tab a little bit vertically
  tabHost.getTabWidget().getChildAt(0).getLayoutParams().height = 40;

  // Bump the text size up
  LinearLayout ll = (LinearLayout) tabHost.getChildAt(0);
  android.widget.TabWidget tw = (android.widget.TabWidget) ll.getChildAt(0);
  RelativeLayout rllf = (RelativeLayout) tw.getChildAt(0);
  TextView lf = (TextView) rllf.getChildAt(1);
  lf.setTextSize(20);

  // Do the same for the other tab
  intent = new Intent().setClass(this, TabTwo.class);
  spec = tabHost.newTabSpec("tabTwo");
  spec.setContent(intent);
  spec.setIndicator("Tab Two");
  tabHost.addTab(spec);

  tabHost.getTabWidget().getChildAt(1).getLayoutParams().height = 40;
  RelativeLayout rlrf = (RelativeLayout) tw.getChildAt(1);
  TextView rf = (TextView) rlrf.getChildAt(1);
  rf.setTextSize(20);

  tabHost.setCurrentTab(0);
}

Save and fire up the emulator, and you should be able to switch back and forth between your tabs!

