Search Results

Search found 1217 results on 49 pages for 'kevin teh'.


  • ArchBeat Link-o-Rama for 10-24-2012

    - by Bob Rhubart
    Play Oracle Vanquisher
    Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company's stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind.
    Book: DevOps for Developers | The Java Source
    The subject of DevOps has come up in a couple of recent OTN ArchBeat Podcasts, so it's somewhat serendipitous that Tori Wieldt's recent blog post offers an overview of Java Champion Michael Hüttermann's new book, DevOps for Developers, now available from Apress.
    Bring Your Own Device (BYOD): Context is everything… | The ORACLE-BASE Blog
    BYOD is a factor in the evolution of IT, but in what context? "The real IT work in companies is still being done on PCs," says Oracle ACE Director Tim Hall. "Yes, you can use a cloud service on your phone, but look around the office and you will see those cloud services are actually being used by people on PCs."
    Oracle in the Cloud: Oracle E-Business Suite sizing | Tom Laszewski
    Cloud expert Tom Laszewski shares several technical resources that will be helpful for sizing Oracle E-Business Suite.
    Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster
    Author and expert Yuli Vasiliev shows you how to take advantage of multiple Oracle WebLogic Server instances grouped into a cluster to maximize scalability and availability.
    Webcast: Reduce Costs with Oracle's Database Storage Management
    Watch this! Join Oracle experts Kevin Jernigan and Margaret Hamburger for an interactive webcast in which you'll learn how Oracle's Database Storage Management can reduce storage costs and management complexity while improving query performance to meet service-level agreements and compliance requirements. Event Date: Tuesday, November 6, 2012. Event Time: 10 a.m. PT / 1 p.m. ET.
    Thought for the Day
    "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves." — Alan Kay
    Source: softwarequotes.com

    Read the article

  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL Servers, or at least testing your specific workload before enabling it on your servers. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom? Does this advice change at all with SQL 2005 64-bit, SQL 2008, or Windows Server 2008? Ideally, this should be tested in advance in a staging environment, but what about servers that have already made it into production with HT enabled? How can I tell if performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance? Edit: This is especially attractive because of the potential for an across-the-board improvement on some of my high-CPU servers, but the client is going to want to see something concrete that helps me identify which servers could really benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes any little bit helps.
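
    One starting point worth trying (a sketch, not a definitive HT diagnostic - the DMVs below exist from SQL 2005 onward): check whether hyperthreading is actually exposed to SQL Server, and whether signal waits - time spent waiting for a CPU rather than for a resource - make up a large share of total waits, which is one symptom worth correlating with HT before disabling it on a production box.

        -- hyperthread_ratio = logical CPUs per physical socket; compare with
        -- cpu_count to see what SQL Server is actually scheduling onto.
        SELECT cpu_count, hyperthread_ratio
        FROM sys.dm_os_sys_info;

        -- Signal waits as a share of all waits; a persistently high percentage
        -- (often quoted as over 20-25%) suggests CPU scheduling pressure.
        SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS DECIMAL(5,2))
               AS pct_signal_waits
        FROM sys.dm_os_wait_stats;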

    Read the article

  • XenServer 5.5 running WHS.. trying to add local or network printer/scanner/copier

    - by ProstheticHead
    Hi guys. Just wondering if anyone has prior experience in sharing a multifunction printer across a (mostly Windows) network? My situation at the minute is complicated. I DID have the printer attached directly to a Windows Home Server box and was able to scan to a share and print across the network from my other computers. Compatibility problems have forced me to virtualize WHS on XenServer 5.5... This is actually quite useful because I can now run other things on the same box, but the problem is that now WHS doesn't get direct hardware access, so it doesn't see a USB-attached PSC... Grrrr! So now I have a choice to make. I've read somewhere that I can buy an add-in PCI USB controller and somehow set up passthrough to one VM at a time. To me, this sounds complicated, but if it's likely to work reliably I'd prefer this method. I've read about another approach, which I'm not sure about either, but which sounds plausible: a network USB server (NOT a print server) that can somehow make a USB port accessible across a network. My worry here is that it likely needs some kind of 3rd-party software to work... so not ideal. If there are any other methods you can suggest I'd be happy to hear them... I need your help guys. I'm also in the market for a PCI Express SATA controller, nothing flashy, just need up to 8 ports, JBOD and 100% compatibility. Any suggestions? Regards Kevin

    Read the article

  • Is 2 GB of RAM better than 2.5 GB?

    - by pibboater
    My laptop has two slots for RAM, and currently has two 512 MB chips, for 1 GB. Windows XP is running terribly slow on it, so I want to upgrade the RAM. I could buy two 1 GB chips to replace both of the current 512 MB chips, to give me 2 GB of RAM. Or, the price is the same to buy one 2 GB chip, to replace just one of the 512 MB chips, and give me 2.5 GB total. The RAM it takes is PC2-4200 533MHz DDR2. What do you think would be better: buying two 1 GB chips so it can take advantage of dual-channel operation, or buying one 2 GB chip to end up with more total RAM but not dual-channel operation? Like I said, price is the same, so performance is the only consideration. I'm not doing anything especially intensive like video or photo editing -- just having multiple Office programs open, playing music, browsers, etc., but currently even opening the first application takes forever. If it matters, the laptop is a Toshiba Qosmio G25-AV513 running Windows XP Media Center SP3. Thanks! Kevin

    Read the article

  • How do I get .NET to garbage collect aggressively?

    - by mmr
    I have an application that is used in image processing, and I find myself typically allocating arrays in the 4000x4000 ushort size, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out-of-memory error. 32 MB is not a huge allocation, but if .NET is fragmenting memory, then it's very possible that such large contiguous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there are the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling DLL routines that use native code a lot, but I'm not sure. I've gone over that C++ code, and made sure that any memory I declare I delete, but still I get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call -- is that possible? If so, can I turn that functionality off?
    EDIT: Here is some very specific code that will cause the crash. According to this SO question, I do not need to be disposing of the BitmapSource objects here. Here is the naive version, with no GC.Collect calls in it. It generally crashes on iteration 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the BitmapSource because of the limitations I explained in my answer to @dthorpe below, as well as the requirements listed in this SO question.

        public partial class Window1 : Window
        {
            public Window1()
            {
                InitializeComponent();
                // Attempts to create an OOM crash: mimic minute croppings of an
                // 'image' (ushort array), and then undo the crops.
                int theRows = 4000, currRows;
                int theColumns = 4000, currCols;
                int theMaxChange = 30;
                int i;
                List<ushort[]> theList = new List<ushort[]>(); // the list of images in the undo/redo stack
                byte[] displayBuffer = null;                   // the buffer used as a bitmap source
                BitmapSource theSource = null;
                for (i = 0; i < theMaxChange; i++)
                {
                    currRows = theRows - i;
                    currCols = theColumns - i;
                    theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
                    displayBuffer = new byte[theList[i].Length];
                    theSource = BitmapSource.Create(currCols, currRows, 96, 96,
                        PixelFormats.Gray8, null, displayBuffer,
                        (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                    System.Console.WriteLine("Got to change " + i.ToString());
                    System.Threading.Thread.Sleep(100);
                }
                // Should get here. If not, then theMaxChange is too large.
                // Now, go back up the undo stack.
                for (i = theMaxChange - 1; i >= 0; i--)
                {
                    displayBuffer = new byte[theList[i].Length];
                    theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96,
                        PixelFormats.Gray8, null, displayBuffer,
                        ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                    System.Console.WriteLine("Got to undo change " + i.ToString());
                    System.Threading.Thread.Sleep(100);
                }
            }
        }

    Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash.
    For me, this tends to happen around x = 50 or so:

        public partial class Window1 : Window
        {
            public Window1()
            {
                InitializeComponent();
                // Attempts to create an OOM crash: mimic minute croppings of an
                // 'image' (ushort array), and then undo the crops.
                for (int x = 0; x < 1000; x++)
                {
                    int theRows = 4000, currRows;
                    int theColumns = 4000, currCols;
                    int theMaxChange = 30;
                    int i;
                    List<ushort[]> theList = new List<ushort[]>(); // the list of images in the undo/redo stack
                    byte[] displayBuffer = null;                   // the buffer used as a bitmap source
                    BitmapSource theSource = null;
                    for (i = 0; i < theMaxChange; i++)
                    {
                        currRows = theRows - i;
                        currCols = theColumns - i;
                        theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
                        displayBuffer = new byte[theList[i].Length];
                        theSource = BitmapSource.Create(currCols, currRows, 96, 96,
                            PixelFormats.Gray8, null, displayBuffer,
                            (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                    }
                    // Should get here. If not, then theMaxChange is too large.
                    // Now, go back up the undo stack.
                    for (i = theMaxChange - 1; i >= 0; i--)
                    {
                        displayBuffer = new byte[theList[i].Length];
                        theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96,
                            PixelFormats.Gray8, null, displayBuffer,
                            ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                        GC.WaitForPendingFinalizers(); // force gc to collect, because we're in scenario 2, lots of large random changes
                        GC.Collect();
                    }
                    System.Console.WriteLine("Got to changelist " + x.ToString());
                    System.Threading.Thread.Sleep(100);
                }
            }
        }

    If I'm mishandling memory in either scenario, or if there's something I should spot with a profiler, let me know. That's a pretty simple routine there. Unfortunately, it looks like @Kevin's answer is right -- this is a bug in .NET and how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could PowerPoint be rewritten in .NET with this kind of limitation, or any of the other Office suite applications? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that uses so-called 'large' allocations frequently would become unstable within a matter of days to weeks when using .NET.
    EDIT: It looks like Kevin is right, this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and the LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, depending on creation time (moved from gen0 to gen1 to gen2, etc.). Objects larger than 85k get placed on the LOH. The LOH is never compacted, so allocations of the type I'm doing will eventually cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 does help the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier -- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java does not exhibit this behavior with its GC.
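
    A later-framework footnote for readers finding this now: .NET 4.5.1 added an opt-in, one-shot LOH compaction. It is no help for the 2.0/3.5-era app discussed above, but it addresses exactly this fragmentation. A minimal sketch:

        using System;
        using System.Runtime;

        class LohCompaction
        {
            static void Main()
            {
                // Allocate LOH-sized buffers (anything over ~85,000 bytes goes
                // to the Large Object Heap), then free every other one to
                // simulate fragmentation.
                byte[][] buffers = new byte[32][];
                for (int i = 0; i < buffers.Length; i++)
                    buffers[i] = new byte[200000];
                for (int i = 0; i < buffers.Length; i += 2)
                    buffers[i] = null; // punch holes in the LOH

                // Opt in to a one-time LOH compaction on the next full collect.
                GCSettings.LargeObjectHeapCompactionMode =
                    GCLargeObjectHeapCompactionMode.CompactOnce;
                GC.Collect();
                Console.WriteLine("LOH compacted once; buffers still alive: " + buffers.Length);
            }
        }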

    Read the article

  • Running Powershell from within SharePoint

    - by Norgean
    Just because something is a daft idea doesn't mean it can't be done. We sometimes need to do some housekeeping - like delete old files or list items or... yes, well, whatever you use Powershell for in a SharePoint world. Or it could be that your solution has "issues" for which you have Powershell solutions, but not the budget to transform into proper bug fixes. So you create a "how to" for the ITPro guys. Idea: What if we keep the scripts in a list, and have SharePoint execute the scripts on demand? An announcements list (because of the multiline body field). Warning! Let us be clear. This list needs to be locked down; if somebody creates a malicious script and you run it, I cannot help you.
    First, we need to figure out how to start Powershell scripts from C#. Hit the interwebs and the Googlie, and you may find jpmik's post: http://www.codeproject.com/Articles/18229/How-to-run-PowerShell-scripts-from-C. (Or MS' official answer at http://msdn.microsoft.com/en-us/library/ee706563(v=vs.85).aspx)

        public string RunPowershell(string powershellText, SPWeb web, string param1, string param2)
        {
            // Powershell ~= RunspaceFactory - i.e. create a Powershell context
            var runspace = RunspaceFactory.CreateRunspace();
            var resultString = new StringBuilder();
            try
            {
                // Load the SharePoint snapin - note: you cannot do this in the
                // script itself (i.e. Add-PSSnapin etc. does not work).
                PSSnapInException snapInError;
                runspace.RunspaceConfiguration.AddPSSnapIn("Microsoft.SharePoint.PowerShell", out snapInError);
                runspace.Open();
                // Set a web variable...
                runspace.SessionStateProxy.SetVariable("webContext", web);
                // ...and some user-defined parameters.
                runspace.SessionStateProxy.SetVariable("param1", param1);
                runspace.SessionStateProxy.SetVariable("param2", param2);
                var pipeline = runspace.CreatePipeline();
                pipeline.Commands.AddScript(powershellText);
                // Add a "return" variable.
                pipeline.Commands.Add("Out-String");
                // Execute!
                var results = pipeline.Invoke();
                // Convert the script result into a single string.
                foreach (PSObject obj in results)
                {
                    resultString.AppendLine(obj.ToString());
                }
            }
            finally
            {
                // Close the runspace.
                runspace.Close();
            }
            // Consider logging the result. Or something.
            return resultString.ToString();
        }

    Ok. We've written some code. Let us test it.

        var runner = new PowershellRunner();
        runner.RunPowershell(@"
            $web = Get-SPWeb 'http://server/web'  # or $webContext
            $web.Title = $param1
            $web.Update()
            $web.Dispose()
        ", null, "New title", "not used");

    Next step: connect the code to the list, or more specifically, have the code execute on one (or several) list items. As there are more options than readers, I'll leave this as an exercise for the reader. Some alternatives:
    - Create a ribbon button that calls RunPowershell with the body of the selected items
    - Add a layout page
    - Specify the list item from the query string (possibly coupled with a content editor webpart with html that links directly to this page with the querystring)
    - Webpart
    - Listing with an "execute" column
    - List with multiselect and an execute button
    - Etc!
    Now that you have the code for executing Powershell scripts, you can easily expand this into a timer job, which executes scripts at regular intervals. But if the previous solution was dangerous, this is even worse - the scripts will usually be run with one of the admin accounts, and can do pretty much anything... One more thing... Note that as this is running "consoleless", calls to Write-Host will fail. Two solutions: remove all output, or check whether the script is run in a console window or not.

        if ($host.Name -eq "ConsoleHost") { Write-Host 'If I agreed with you we''d both be wrong' }

    Read the article

  • Using a Case statement within the values section of an Insert statement

    - by mattgcon
    Please forgive my ignorance and poor SQL programming skills, but I am normally a basic SQL developer. I need to create a trigger off the insertion of data in one table to insert different data into another table. Within this trigger I need to insert certain data into the new table based upon values within the newly inserted data from the original table. I am totally confused on this. I thought I would be creative and use a CASE statement within the VALUES section, but it is not working. Can anyone please help me on this? (Below is the code for the trigger as it stands now.)

        INSERT INTO dbo.WebOnlineUserPeopleDashboard (
            ONLINE_USERACCOUNT_ID, ONLINE_ROOMS_DIRECTORY, ONLINE_ROOMS_LIST,
            ONLINE_ROOMS_PLACEMENT, ONLINE_ROOMS_MANAGEMENT, ONLINE_MAILINGLIST_DIRECTORY,
            ONLINE_MAILINGLIST_LIST, ONLINE_MAILINGLIST_MEMBERS, ONLINE_MAILINGLIST_MANAGER,
            ONLINE_PEOPLESEARCH_DIRECTORY )
        VALUES
        IF (SELECT ONLINE_PEOPLE_FULL_ACCESS FROM INSERTED) = 1
        BEGIN
            SELECT ONLINE_USERACCOUNT_ID, 1, 1, 1, 1, 1, 1, 1, 1, 1 FROM INSERTED
        END
        ELSE IF (SELECT ONLINE_PEOPLE_FULL_ACCESS FROM INSERTED) = 0
        BEGIN
            SELECT ONLINE_USERACCOUNT_ID, 0, 0, 0, 0, 0, 0, 0, 0, 0 FROM INSERTED
        END
        ELSE
        BEGIN
            SELECT ONLINE_USERACCOUNT_ID,
            CASE --DIRECTORY
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1
                    OR ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 1
                    OR ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 THEN 1
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 1
                    OR ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 0 AND ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 0
                    AND ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_ROOMS_MANAGEMENT_FULL_ACCESS = 1 THEN 1
                WHEN ONLINE_PEOPLE_ROOMS_MANAGEMENT_FULL_ACCESS = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_MAILING_LISTS_FULL_ACCESS = 1 OR ONLINE_PEOPLE_MAILING_LISTS_VIEW = 1
                    OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 1
                    OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_MAILING_LISTS_FULL_ACCESS = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_MAILING_LISTS_VIEW = 1 THEN 1
                WHEN ONLINE_PEOPLE_MAILING_LISTS_VIEW = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 1
                    OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 0 AND ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 0
                    AND ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_MAILING_LISTS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_UPDATE = 1
                    OR ONLINE_PEOPLE_MAILING_LISTS_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_MAILING_LISTS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_UPDATE = 1
                    OR ONLINE_PEOPLE_MAILING_LISTS_DELETE = 1 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_PEOPLE_SEARCH = 1 THEN 1
                WHEN ONLINE_PEOPLE_PEOPLE_SEARCH = 0 THEN 0
            END
            FROM INSERTED
        END
        END
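
    For reference, the usual shape of the fix (a sketch against the column names above, abbreviated to two target columns): VALUES cannot contain IF or CASE, but INSERT ... SELECT can carry CASE expressions in its select list - and it also handles multi-row inserts correctly, which the scalar (SELECT ... FROM INSERTED) = 1 test does not.

        -- Sketch: one INSERT ... SELECT, with the branching folded into CASE.
        INSERT INTO dbo.WebOnlineUserPeopleDashboard
            (ONLINE_USERACCOUNT_ID, ONLINE_ROOMS_DIRECTORY)
        SELECT i.ONLINE_USERACCOUNT_ID,
               CASE
                   WHEN i.ONLINE_PEOPLE_FULL_ACCESS = 1 THEN 1
                   WHEN i.ONLINE_PEOPLE_FULL_ACCESS = 0 THEN 0
                   WHEN i.ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 1
                     OR i.ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 THEN 1
                   ELSE 0
               END
        FROM INSERTED AS i;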

    Read the article

  • Friend Assemblies in C#

    - by Tim Long
    I'm trying to create some 'friend assemblies' using the [InternalsVisibleTo()] attribute, but I can't seem to get it working. I've followed Microsoft's instructions for creating signed friend assemblies and I can't see where I'm going wrong. So I'll detail my steps here and hopefully someone can spot my deliberate mistake...?
    Create a strong name key and extract the public key, thus:

        sn -k StrongNameKey
        sn -p StrongNameKey public.pk
        sn -tp public.pk

    Add the strong name key to each project and enable signing. Create a project called Internals with a class that has an internal property:

        namespace Internals
        {
            internal class ClassWithInternals
            {
                internal string Message { get; set; }

                public ClassWithInternals(string m)
                {
                    Message = m;
                }
            }
        }

    Create another project called TestInternalsVisibleTo:

        namespace TestInternalsVisibleTo
        {
            static class Program
            {
                /// <summary>
                /// The main entry point for the application.
                /// </summary>
                [STAThread]
                static void Main()
                {
                    var c = new Internals.ClassWithInternals("Test");
                    Console.WriteLine(c.Message);
                }
            }
        }

    Edit the AssemblyInfo.cs file for the Internals project, and add the necessary attribute:

        [assembly: AssemblyTitle("AssemblyWithInternals")]
        [assembly: AssemblyDescription("")]
        [assembly: AssemblyConfiguration("")]
        [assembly: AssemblyCompany("Microsoft")]
        [assembly: AssemblyProduct("Internals")]
        [assembly: AssemblyCopyright("Copyright © Microsoft 2010")]
        [assembly: AssemblyTrademark("")]
        [assembly: AssemblyCulture("")]
        [assembly: ComVisible(false)]
        [assembly: Guid("41c590dc-f555-48bc-8a94-10c0e7adfd9b")]
        [assembly: AssemblyVersion("1.0.0.0")]
        [assembly: AssemblyFileVersion("1.0.0.0")]
        [assembly: InternalsVisibleTo("TestInternalsVisibleTo PublicKey=002400000480000094000000060200000024000052534131000400000100010087953126637ab27cb375fa917c35b23502c2994bb860cc2582d39912b73740d6b56912c169e4a702bedb471a859a33acbc8b79e1f103667e5075ad17dffda58988ceaf764613bd56fc8f909f43a1b177172bc4143c96cf987274873626abb650550977dcad1bb9bfa255056bb8d0a2ec5d35d6f8cb0a6065ec0639550c2334b9")]

    And finally... build! I get the following errors:

        error CS0122: 'Internals.ClassWithInternals' is inaccessible due to its protection level
        error CS1729: 'Internals.ClassWithInternals' does not contain a constructor that takes 1 arguments
        error CS1061: 'Internals.ClassWithInternals' does not contain a definition for 'Message' and no extension method 'Message' accepting a first argument of type 'Internals.ClassWithInternals' could be found (are you missing a using directive or an assembly reference?)

    Basically, it's as if I had not used the InternalsVisibleTo attribute. Now, I'm not going to fall into the trap of blaming the tools, so what's up here? Anyone?
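
    If I'm reading the attribute above correctly, the culprit may simply be the missing comma between the assembly name and the PublicKey pair - without it, the whole string is parsed as one assembly name and the friendship never matches. The documented form would be:

        [assembly: InternalsVisibleTo("TestInternalsVisibleTo, PublicKey=002400000480000094000000060200000024000052534131000400000100010087953126637ab27cb375fa917c35b23502c2994bb860cc2582d39912b73740d6b56912c169e4a702bedb471a859a33acbc8b79e1f103667e5075ad17dffda58988ceaf764613bd56fc8f909f43a1b177172bc4143c96cf987274873626abb650550977dcad1bb9bfa255056bb8d0a2ec5d35d6f8cb0a6065ec0639550c2334b9")]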

    Read the article

  • jQuery not and classes

    - by Giles B
    Hi Guys, I have 2 anchor links (a.selector), and when one is clicked it has a class of 'active-arrow' applied to it; the click also removes the class of the same name from the other anchor, as well as lowering its opacity to 0.2. I then want to have a fade effect when the user hovers over the anchor that doesn't have 'active-arrow' applied to it, so that it goes to full opacity on mouseenter and back to 0.2 on mouseleave. The problem I'm having is that both .not and :not don't seem to be working as expected: the hover effect works, but if I click on the anchor whilst hovering, the 'active-arrow' class is applied, yet on mouseleave the opacity is faded down to 0.2 again even though the 'active-arrow' class is applied. Also, the hover then doesn't work for the other a link, which has had 'active-arrow' removed. It's a bit of a hard one to explain, so here's some code that hopefully helps a bit.

        // If a.selector doesn't have the class 'active-arrow', run the hoverFade function
        $("a.selector").not(".active-arrow").hoverFade();

        // Functions for first element
        $('a.selector-1').click(function () {
            $('a.selector-2').removeClass('active-arrow');            // Remove background image from corresponding element
            $('ul#storage-items-2').fadeOut(1200).addClass('hide');   // Fade out then hide corresponding list
            $(this).addClass('active-arrow', 'fast');                 // Add background image to current element
            $('ul#storage-items-1').removeClass('hide').fadeIn(1800); // Unhide and fade in the list
            $('a.selector-2').fadeTo(500, 0.2);                       // Fade corresponding element
            $(this).fadeTo(800, 1);                                   // Fade this element to full opacity
        });

    I only included the code for the first anchor (a.selector-1), as the code for the second anchor is identical but just changes the class names to a.selector-2. Also, the hoverFade function is in a separate file so we can re-use it:

        jQuery.fn.hoverFade = function() {
            return this.each(function(){
                $(this).hover(
                    function () {
                        $(this).fadeTo(500, 0.8);
                    },
                    function () {
                        $(this).fadeTo(500, 0.2);
                    });
            });
        }

    Each anchor link fades in and fades out a UL as well. Any help is most appreciated. Thanks, Giles
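
    One pattern that sidesteps this whole class of problem (a sketch, not Giles's code): .not() filters the matched set once, at bind time, so later addClass/removeClass calls are invisible to the already-bound handlers. Testing the class inside the handlers instead makes the decision at event time:

        // Sketch: evaluate 'active-arrow' when the event fires, not when
        // the handler is bound.
        jQuery.fn.hoverFade = function () {
            return this.each(function () {
                $(this).hover(
                    function () {
                        if (!$(this).hasClass('active-arrow')) { $(this).fadeTo(500, 0.8); }
                    },
                    function () {
                        if (!$(this).hasClass('active-arrow')) { $(this).fadeTo(500, 0.2); }
                    }
                );
            });
        };

        // Bind once, to both anchors, with no .not() filter:
        // $("a.selector").hoverFade();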

    Read the article

  • WebBrowser control won't display an https site that IE8 on the same PC will

    - by Velika
    In IE8, I get the following warning, but if I choose to continue, the site displays properly:

        There is a problem with this website's security certificate. The security certificate presented by this website was issued for a different website's address. Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server. We recommend that you close this webpage and do not continue to this website. Click here to close this webpage. Continue to this website (not recommended). More information

    In the WebBrowser control, I get this at first:

        Navigation to the webpage was canceled. What you can try: Refresh the page.

    When I refresh the page, this time I get the same warning as I originally got in IE8, but when I click "Continue to this website (not recommended)", the page refreshes again, displaying the same warning. What can I do to get the site to display in the WebBrowser control as it does in IE8? I would've thought that the control would be using the same core logic and therefore expected the same result.

    Read the article

  • Strange VS2005 compile errors: unable to locate resource file (because the compiler keeps deleting it)

    - by Velika
    I am getting the following error in a very simple class library:

        Error 1: Unable to copy file "obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources" to "obj\Debug\SMIT.SysAdmin.BusinessLayer.SMIT.SysAdmin.BusinessLayer.Resources.resources". Could not find file 'obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources'. SMIT.SysAdmin.BusinessLayer

    Going to the Project Properties > Resources tab, I see that I defined no resources. Still, I tried to delete the resource file and recreate it by going to the Resources tab. When I recompile, I still get the same error. Why is it even looking for a resource file? I define no resources on the project properties tab and added no new resource file items. Any suggestions of things to try? Update: I found the missing file in an old backup. I copied it to the location where the compiler expected it, and then successfully recompiled the project that previously had compile-time errors. However, when I rebuild the entire solution, it deletes the file that I previously restored and I'm back to where I started. My solution contains several projects (maybe 10 or so). Could VS 2005 be having a problem determining dependencies and the proper order to compile these projects?

    Read the article

  • Table design issues - should I create separate fields or store as a blob

    - by Ali
    Hi guys, I'm working on my web-based ordering system and we would like to maintain a kind of task history for each of our orders. A history in the sense that we would like to maintain a log of who did what on an order: let's say an order has been entered - we would like to know if the order was acknowledged, for example. Or let's say somebody followed up on the order - etc. Considering that there are numerous situations like this for each order, would it be wise to create a schema on the lines of:

        Orders: ID - title - description - date - is_ack - is_follow - ack_by .....

    That amounts to a lot of fields - on the other hand, I could have one LongText field called 'history' and fill it with a serialised object holding all the information. However, in the latter case I can't run a query to, let's say, retrieve all orders that have not been acknowledged, and stuff like that. Over time, requirements would change and I would be required to modify the schema to allow for more detailed tracking, which is why I need a design that is feasible to scale upon; yet I don't want to be restricted on the SQL side too much.
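
    A third option worth weighing (a sketch with hypothetical names - the post doesn't name its database): one audit row per action keeps the log queryable, and new event types need no schema change.

        -- One row per action taken on an order.
        CREATE TABLE order_history (
            id         INT IDENTITY PRIMARY KEY,
            order_id   INT NOT NULL,
            action     VARCHAR(50) NOT NULL,  -- 'entered', 'acknowledged', 'followed_up', ...
            actor_id   INT NOT NULL,
            created_at DATETIME NOT NULL
        );

        -- "All orders not yet acknowledged":
        SELECT o.id
        FROM orders AS o
        LEFT JOIN order_history AS h
            ON h.order_id = o.id AND h.action = 'acknowledged'
        WHERE h.id IS NULL;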

    Read the article

  • rails search nested set (categories and sub categories)

    - by bob
    Hello, I am using the http://github.com/collectiveidea/awesome_nested_set awesome nested set plugin, and currently, if I choose a sub category as my category_id for an item, I cannot search by its parent. Say I have Category.parent and Category.child, and I choose Category.child as the category that my item is in. So now my item has a category_id of 4 stored in it. If I go to a page in my rails application - let's say the Category page - and I am on Category.parent's page, I want to show products that have the category_ids of all the descendants as well. So ideally I want a find method that can take the descendants into account. You can get the descendants of a root by calling root.descendants (a built-in plugin method). How would I go about making it so I can query a find that gets the descendants of a root, instead of what it's doing now, which is bringing up nothing unless the product has the specific category_id of Category.parent? I hope I am being clear here. I either need to figure out a way to create a find method or named_scope that can query and return an array of objects whose ids correspond to the descendants of a root, OR, if I have any other options, what are they? I thought about creating a field in my products table like parent_id, which can keep track of the parent, so I can then create two named scopes - one finding the parent stuff and one finding the child stuff - and chain them. I know I can create a named scope for each child and chain them together for multiple children, but this seems a very tedious process, and also, if you add more children, you would need to specify more named scopes.
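
    A named_scope along these lines is probably the least painful route (a sketch in Rails 2.x-era syntax, hypothetical model names; awesome_nested_set supplies self_and_descendants):

        # Sketch: find products in a category or any of its descendants.
        class Product < ActiveRecord::Base
          belongs_to :category

          named_scope :in_category_tree, lambda { |category|
            { :conditions => { :category_id => category.self_and_descendants.map(&:id) } }
          }
        end

        # On the parent category's page:
        # @products = Product.in_category_tree(@category)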

    Read the article

  • Crystal Reports .Net Guidance

    - by Ken Ray
    We have been using .Net and Visual Studio for the last six years, and early on developed a number of web-based reporting applications using the .Net version of Crystal Reports that was bundled with Visual Studio. To say the least, I have been rather unimpressed with that product. It seemed incredibly difficult and convoluted to use; we had to make security changes, install various extra software, and so on. Now we are moving to VS2008 and version 3.5 of the .Net framework, and the time has come to redevelop some of these old applications. The developers who used (and somehow mastered) Crystal .Net have long gone, and I am facing a decision - do we stick with Crystal Reports or move to something else? We also have the "full" version of Crystal Reports XI at our disposal. The way we use the product is to produce PDF versions of data extracted from various databases. While some apps use the inbuilt Crystal Reports viewer as well, this seems to be redundant now with the flexibility of grid views - but there is still the need to produce a PDF version of the data in the grid for printing, or an Excel version for download. What is the consensus? Is Crystal Reports .Net worth persisting with, or should we work out how to use version XI? Alternatively, is there a simple and low-cost way to generate PDF reports without using Crystal? What good sources of "how to" information have others found and recommend? Are there suitable books, designed for VS2008 / .Net 3.5 development, that you have used and found of benefit? Thanks in advance.

    Read the article

  • Powershell / .Net: Get a reference to an object returned by a method

    - by Dan Menes
    I am teaching myself PowerShell by writing a simple parser. I use the .Net framework class Collections.Stack. I want to modify the object at the top of the stack in place. I know I can pop() the object off, modify it, and then push() it back on, but that strikes me as inelegant. First, I tried this:

        $stk = new-object Collections.Stack
        $stk.push( (,'My first value') )
        ( $stk.peek() ) += ,'| My second value'

    Which threw an error:

        Assignment failed because [System.Collections.Stack] doesn't contain a settable property 'peek()'.
        At C:\Development\StackOverflow\PowerShell-Stacks\test.ps1:3 char:12
        + ( $stk.peek <<<< () ) += ,'| My second value'
            + CategoryInfo          : InvalidOperation: (peek:String) [], RuntimeException
            + FullyQualifiedErrorId : ParameterizedPropertyAssignmentFailed

    Next I tried this:

        $ary = $stk.peek()
        $ary += ,'| My second value'
        write-host "Array is: $ary"
        write-host "Stack top is: $($stk.peek())"

    Which prevented the error but still didn't do the right thing:

        Array is: My first value | My second value
        Stack top is: My first value

    Clearly, what is getting assigned to $ary is a copy of the object at the top of the stack, so when I modify the object in $ary, the object at the top of the stack remains unchanged. Finally, I read up on the [ref] type, and tried this:

        $ary_ref = [ref]$stk.peek()
        $ary_ref.value += ,'| My second value'
        write-host "Referenced array is: $($ary_ref.value)"
        write-host "Stack top is still: $($stk.peek())"

    But still no dice:

        Referenced array is: My first value | My second value
        Stack top is still: My first value

    I assume the peek() method returns a reference to the actual object, not a clone. If so, then the reference appears to be replaced by a clone by PowerShell's expression-processing logic. Can somebody tell me if there is a way to do what I want to do? Or do I have to revert to pop() / modify / push()?
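
    For what it's worth, the snag here may be less Peek() than +=: += on a .NET array always builds a brand-new array, so even a genuine reference to the top of the stack won't help. Pushing a mutable collection avoids pop/modify/push entirely (a sketch):

        # Sketch: keep an ArrayList on the stack; Peek() returns a reference
        # to it, and Add() mutates it in place.
        $stk = New-Object System.Collections.Stack
        $stk.Push((New-Object System.Collections.ArrayList))
        [void]$stk.Peek().Add('My first value')
        [void]$stk.Peek().Add('| My second value')
        Write-Host "Stack top is: $($stk.Peek() -join ' ')"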

    Read the article

  • Google Contacts Error: If-Match or If-None-Match header or entry etag attribute required

    - by Ali
    Hi guys, I'm following the example code as defined on this website: http://www.ibm.com/developerworks/opensource/library/x-phpgooglecontact/index.html. I want to be able to integrate with my Google Apps account and play around with the contacts. However, I get this error whenever I try to run the modify-contacts code:

        If-Match or If-None-Match header or entry etag attribute required

    This is my code:

        Zend_Loader::loadClass('Zend_Gdata_ClientLogin');
        Zend_Loader::loadClass('Zend_Http_Client');
        Zend_Loader::loadClass('Zend_Gdata_Query');
        Zend_Loader::loadClass('Zend_Gdata_Feed');

        $client = getGoogleClient('cp'); // this is a function I made - it's working fine
        $client->setHeaders('If-Match: *');
        $gdata = new Zend_Gdata($client);
        $gdata->setMajorProtocolVersion(3);
        $query = new Zend_Gdata_Query($id); // $id is the google reference
        $entry = $gdata->getEntry($query);
        $xml = simplexml_load_string($entry->getXML());
        $xml->name->fullName = trim($contact->first_name).' '.trim($contact->last_name);
        $entryResult = $gdata->updateEntry($xml->saveXML(), $id);

    Read the article

  • SQL Server Full Text Search with ContainsTable is very slow when used in a JOIN!

    - by Bob
    Hello, I am using SQL 2008 full text search and I am having serious issues with performance, depending on how I use Contains or ContainsTable. Here are samples (table1 has about 5000 records and there is a covering index on table1 which has all the fields in the where clause; I tried to simplify the statements, so forgive me if there are syntax issues):

    Scenario 1:

        select * from table1 as t1
        where t1.field1=90 and t1.field2='something'
        and Exists(select top 1 * from containstable(table1,*, 'something') as t2
                   where t2.[key]=t1.id)

    Result: 10 seconds (very slow).

    Scenario 2:

        select * from table1 as t1
        join containstable(table1,*, 'something') as t2 on t2.[key] = t1.id
        where t1.field1=90 and t1.field2='something'

    Result: 10 seconds (very slow).

    Scenario 3:

        Declare @tbl Table(id uniqueidentifier primary key)
        insert into @tbl select [key] from containstable(table1,*, 'something')

        select * from table1 as t1
        where t1.field1=90 and t1.field2='something'
        and Exists(select id from @tbl as tbl where id=t1.id)

    Result: a fraction of a second (super fast).

    Bottom line: it seems that if I use ContainsTable in any kind of join or where-clause condition of a select statement that also has other conditions, the performance is really bad. In addition, if you look at Profiler, the number of reads from the database goes through the roof. But if I first do the full text search and put the results in a table variable, everything goes super fast. The number of reads is also much lower. It seems that in the "bad" scenarios it somehow gets stuck in a loop, which causes it to read many times from the database, but of course I don't understand why. Now the question is, first of all, why is that happening? And question two is, how scalable are table variables? What if the search returns tens of thousands of records - will it still be fast? Any ideas? Thanks
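
    On the scalability question, a temp-table variant of scenario 3 may hold up better at tens of thousands of rows (a sketch): unlike a table variable, a temp table carries column statistics, so the optimizer can still pick a sensible join plan for large result sets.

        -- Materialize the full-text matches once, then join.
        SELECT [key] AS id INTO #ft
        FROM CONTAINSTABLE(table1, *, 'something');

        SELECT t1.*
        FROM table1 AS t1
        JOIN #ft AS ft ON ft.id = t1.id
        WHERE t1.field1 = 90 AND t1.field2 = 'something';

        DROP TABLE #ft;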

    Read the article

  • Dojo DataGrid, programmatic creation

    - by djna
    I'm trying to create a DataGrid dynamically. Declarative creation is working fine. A source of data:

        <span dojoType="dojo.data.ItemFileReadStore" jsId="theStore"
              url="dataonly001.json">
        </span>

    A simple layout:

        <script type="text/javascript">
            var layout = [
                { field: 'custId', name: 'Id' }
                // more items elided ...
            ];
        </script>

    The grid:

        <body class="tundra" id="mainbody">
            <div id="grid" dojoType="dojox.grid.DataGrid"
                 store="theStore" structure="layout" autoWidth="true"></div>
        </body>

    And I get the grid displayed nicely. Instead, I want to create the grid dynamically. For the sake of figuring out what's broken, I've tried to use exactly the same layout and store, removing just the grid declaration and adding this script:

        <script type="text/javascript" djConfig="parseOnLoad: true, debugAtAllCosts: true">
            dojo.addOnLoad(function(){
                // initialise and lay out home page
                console.log("Have a store:" + theStore);
                console.log("Have a structure:" + layout);
                var grid = new dojox.grid.DataGrid({
                    id: "grid",
                    store: theStore,
                    clientSort: "true",
                    structure: layout
                });
                grid.startup();
                dojo.place(grid.domNode, dojo.body(), "first");
            });
        </script>

    The effect I get is a completely empty rectangle. Using Firebug I can see that the DataGrid widget is created, but that it has no rows or columns. So my guess is that the datastore or layout is not being passed correctly. However, it does appear that at the point of creation the values of theStore and layout are correct. Suggestions please - or indeed, a working example of a programmatic grid might solve the problem.
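
    In case it helps anyone landing here: with programmatic Dojo grids, the usual suspect for an empty rectangle is ordering - place the node in the document first, then call startup(), since startup() measures the node and a detached node measures as nothing. A sketch, not a guaranteed fix:

        dojo.addOnLoad(function () {
            var grid = new dojox.grid.DataGrid({
                id: "grid",
                store: theStore,
                clientSort: true,
                structure: layout
            });
            dojo.place(grid.domNode, dojo.body(), "first");
            grid.startup(); // after the node is in the DOM
        });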

    Read the article

  • Empty data problem - data layer or DAL?

    - by luckyluke
    I am designing the new App now and giving the following question a lot of thought. I consume a lot of data from the warehouse, and the entities have a lot of dictionary-based values (currency, country, tax-whatever data) - dimensions. I cannot be assured, though, that there won't be nulls. So I am thinking:
    - create an empty value in each of the dictionaries with a special keyID, i.e. -1
    - have the ETL (SSIS) do the correct stuff and insert -1 where it needs to
    - let the DAL know that -1 is special (static const, whatever)
    - don't bother checking in the code for nullness of dictionary entries, because THEY will always have a value
    But maybe I should be thinking:
    - import data AS IS
    - let the DAL do the thinking, using the empty record pattern
    - still don't care in the code, because the business layer will have what it needs from the DAL
    I think this is more of an approach thing, but maybe I am missing something important here... What do you think? Am I being clear? Please don't confuse it with the empty record problem. I do use the emptyCustomer thing all the time, and other defaults too.
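
    A minimal sketch of the second option's "empty record" idea (hypothetical types), just to make the comparison concrete: the DAL substitutes a well-known instance and callers never branch on null.

        // Sketch: a null-object style dictionary entry.
        public class Currency
        {
            public static readonly Currency Empty = new Currency { Id = -1, Code = "" };
            public int Id { get; set; }
            public string Code { get; set; }
            public bool IsEmpty { get { return Id == -1; } }
        }

        // In the DAL lookup:
        // Currency c;
        // return lookup.TryGetValue(id, out c) ? c : Currency.Empty;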

    Read the article

  • Scripting for JAVA - a problem to solve

    - by Parhs
    Hello. I've had no luck finding a good way to achieve the following: suppose I let the user write a condition using JavaScript:

        INS5 || ASTO.valueBetween(10,210) ... etc....

    I tried to find a way to get the variable names from Java... the Rhino library didn't help a lot... However, I found that by handling exceptions I could get all the identifiers! Everything is great, but one little problem: how the heck can I replace these identifiers with the numeric id of each variable? E.g. INS becomes _234, ASTO becomes _331, and so on. I want to do this replacement because the name may change... I could do it using a plain replace, but I know that this isn't easy, because:
    1) replacing _23 with MPLAH may also replace _234... (this could be fixed with regexp somehow...)
    2) what if _23 is in a comment section? Rare, but possible: /* _23 fdsafd ktl */ - it should be handled...
    3) what if it is the name of a function? _32() {} - rare, but shouldn't be replaced
    4) what if it is enclosed in "" or ''?
    And I am sure that there are a lot more cases..... Any ideas? Thank you for your time.
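
    Since Rhino is already in play: its AST API (in 1.7R2 and later, if memory serves) can do the rename robustly, because only genuine identifier nodes are visited - comments, string literals, and partial matches like _234-vs-_23 never come into it. A sketch (caveats: an exact-match function name would still be renamed, and toSource() regenerates the code, so original formatting and comments are not preserved):

        // Sketch: rename identifiers via Rhino's AST instead of string replace.
        import org.mozilla.javascript.Parser;
        import org.mozilla.javascript.ast.AstNode;
        import org.mozilla.javascript.ast.AstRoot;
        import org.mozilla.javascript.ast.Name;
        import org.mozilla.javascript.ast.NodeVisitor;
        import java.util.Map;

        public class IdentifierRenamer {
            public static String rename(String source, final Map<String, String> ids) {
                AstRoot root = new Parser().parse(source, "user-condition", 1);
                root.visit(new NodeVisitor() {
                    public boolean visit(AstNode node) {
                        if (node instanceof Name) {
                            Name n = (Name) node;
                            String repl = ids.get(n.getIdentifier());
                            if (repl != null) n.setIdentifier(repl);
                        }
                        return true; // descend into children
                    }
                });
                return root.toSource();
            }
        }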

    Read the article

  • How to get HTTP status code in HTTPService fault handler

    - by Ankur
    I am calling a server method through HTTPService from the client side. The server is a RESTful web service and it might respond with one of many HTTP error codes (say, 400 for one error, 404 for another and 409 for yet another). I have been trying to find out how to determine the exact error code sent by the server. I have walked the entire object tree for the FaultEvent populated in my fault handler, but nowhere does it tell me the error code. Is this missing functionality in Flex? My code looks like this. The HTTPService declaration:

        <mx:HTTPService id="myServerCall" url="myService" method="GET"
                        resultFormat="e4x"
                        result="myServerCallCallBack(event)"
                        fault="faultHandler(event)">
            <mx:request>
                <action>myServerCall</action>
                <docId>{m_sDocId}</docId>
            </mx:request>
        </mx:HTTPService>

    My fault handler code is like so:

        private function faultHandler(event : FaultEvent):void
        {
            Alert.show(event.statusCode.toString() + " / " + event.fault.message.toString());
        }
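
    One workaround worth trying (a sketch using plain Flash APIs, so no Flex version dependency; note the browser plugin can report a status of 0 in some runtimes): issue the call with URLLoader, which dispatches the HTTP status separately from the failure, instead of relying on FaultEvent to carry it.

        // Sketch: capture the numeric HTTP status with URLLoader.
        import flash.events.HTTPStatusEvent;
        import flash.events.IOErrorEvent;
        import flash.net.URLLoader;
        import flash.net.URLRequest;

        private function callServer():void {
            var loader:URLLoader = new URLLoader();
            loader.addEventListener(HTTPStatusEvent.HTTP_STATUS,
                function (e:HTTPStatusEvent):void {
                    trace("HTTP status: " + e.status); // 400, 404, 409, ...
                });
            loader.addEventListener(IOErrorEvent.IO_ERROR,
                function (e:IOErrorEvent):void {
                    trace("Request failed: " + e.text);
                });
            loader.load(new URLRequest("myService?action=myServerCall&docId=" + m_sDocId));
        }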

    Read the article

  • How do you clear a CustomValidator Error on a Button click event?

    - by George
    I have a composite User control for entering dates. The CustomValidator will include server-side validation code. I would like the error message to be cleared via client-side script if the user alters the date value in any way. To do this, I included the following code to hook up the two drop-downs and the year text box to the validation control:

        <script type="text/javascript">
            ValidatorHookupControlID("<%= ddlMonth.ClientID %>", document.getElementById("<%= CustomValidator1.ClientID %>"));
            ValidatorHookupControlID("<%= ddlDate.ClientID %>", document.getElementById("<%= CustomValidator1.ClientID %>"));
            ValidatorHookupControlID("<%= txtYear.ClientID %>", document.getElementById("<%= CustomValidator1.ClientID %>"));
        </script>

    However, I would also like the validation error to be cleared when the user clicks the Clear button. When the user clicks the Clear button, the other 3 controls are reset. To avoid a postback, the Clear button is a regular HTML button with an OnClick event that resets the 3 controls. Unfortunately, the ValidatorHookupControlID method does not seem to work on HTML controls, so I thought to change the HTML button to an ASP button and hook up to that control instead. However, I cannot seem to eliminate the postback behavior associated by default with the ASP button control. I tried setting UseSubmitBehavior to false, but it still submits. I tried returning false in my btnClear_OnClick client code, but the code sent to the browser included a DoPostBack call after my call:

        btnClear.Attributes.Add("onClick", "btnClear_OnClick();")

    Instead of adding the OnClick code, I tried overwriting it, but the DoPostBack code was still included in the final code sent to the browser:

        btnClear.Attributes.Item("onClick") = "btnClear_OnClick();"

    What do I have to do to get the Clear button to clear the CustomValidator error when clicked and avoid a postback?
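
    For the Clear button specifically, a plain HTML button can reset the validator directly through ASP.NET's client validation API, which sidesteps the postback question altogether (a sketch; ValidatorUpdateDisplay ships in WebUIValidation.js when client validation is enabled):

        function btnClear_OnClick() {
            // ...reset the two drop-downs and the year textbox here...
            var val = document.getElementById("<%= CustomValidator1.ClientID %>");
            val.isvalid = true;           // mark the validator as passing
            ValidatorUpdateDisplay(val);  // hide its error message
            return false;                 // no postback from an HTML button
        }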

    Read the article

  • Using Javascript to flip flop a textbox's readonly flag

    - by Velika
    I have a frame with several radio buttons where the user is supposed to select the "Category" that his occupation falls into, and then unconditionally also specify his occupation. If the user selects "Retired", the requirement is to prefill "Retired" in the "Specify Occupation" text box and to disable it to prevent it from being changed. The Specify Occupation text box should also no longer be a tab stop. If the user selects a radio button other than Retired, the Specify Occupation text box should be enabled and once again in the normal tab sequence. Originally, I was setting and clearing the disabled property on the Specify Occupation textbox; then I found out that, upon submitting the form, disabled fields are excluded from the submit, and the REQUIRED validator on the Specify Occupation textbox was being raised because the textbox was being blanked out. What is the best way to solve this? My approach below was to mimic a disabled text box by setting/resetting the readonly attribute on the text box and changing the background color to make it appear disabled. (I suppose I should be changing the forecolor instead of the background color.) Nevertheless, my code to make the textbox readonly and to reset it doesn't appear to be working:

        function OccupationOnClick(sender) {
            debugger;
            var optOccupationRetired = document.getElementById("<%= optOccupationRetired.ClientId %>");
            var txtSpecifyOccupation = document.getElementById("<%= txtSpecifyOccupation.ClientId %>");
            var optOccupationOther = document.getElementById("<%= optOccupationOther.ClientId %>");
            if (sender == optOccupationRetired) {
                txtSpecifyOccupation.value = "Retired";
                txtSpecifyOccupation.readonly = "readonly";
                txtSpecifyOccupation.style.backgroundColor = "#E0E0E0";
                txtSpecifyOccupation.tabIndex = -1;
            }
            else {
                if (txtSpecifyOccupation.value == "Retired")
                    txtSpecifyOccupation.value = "";
                txtSpecifyOccupation.style.backgroundColor = "#FFFFFF";
                txtSpecifyOccupation.readonly = "";
                txtSpecifyOccupation.tabIndex = 0;
            }
        }

    Can someone suggest the best way to handle this scenario, and provide a tweak to the code above to fix the setting/resetting of the readonly property?
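
    A likely culprit, for what it's worth: the DOM property is camel-cased readOnly, so assignments to .readonly create an unrelated expando property and never lock the textbox. A sketch of the corrected branches:

        // The DOM property is readOnly (capital O); '.readonly' does nothing.
        if (sender == optOccupationRetired) {
            txtSpecifyOccupation.value = "Retired";
            txtSpecifyOccupation.readOnly = true;
            txtSpecifyOccupation.style.backgroundColor = "#E0E0E0";
            txtSpecifyOccupation.tabIndex = -1;
        } else {
            if (txtSpecifyOccupation.value == "Retired") txtSpecifyOccupation.value = "";
            txtSpecifyOccupation.readOnly = false;
            txtSpecifyOccupation.style.backgroundColor = "#FFFFFF";
            txtSpecifyOccupation.tabIndex = 0;
        }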

    Read the article

  • Sys.WebForms.PageRequestManagerServerErrorException: ... The status code returned from the server was: 404

    - by webnoob
    Hi All, I have seen a few posts regarding this issue, but none specific to my problem, and I have no ideas as to how to debug it. I have some combo boxes on an .aspx page; when I select a value from the first one, it fills the second with values, and so on with the third and fourth. This works with no problems until I wrap an ASP.NET UpdatePanel around the combo boxes and try to "ajaxify" the whole process so the page isn't dancing around. The exact error I get is:

        Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred while processing the request on the server. The status code returned from the server was: 404

    Some things to note:
    - I am using URL rewriting - this is what I think is causing the problem.
    - The error occurs whenever I choose a selection for a SECOND time. This means that I could select a value from the first combo box and get the same error (so it is happening on the second postback - no matter which combo box it's from).
    - I have tried setting EnablePartialRendering="false" on the ScriptManager but, as I said, it works when not using Ajax, so I don't know how to debug the issue.
    My server is Windows 2008 running IIS7 with ASP.NET 2.0. I would really appreciate your help. Thanks in advance.
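
    Given the URL rewriting suspicion, the classic remedy is to point the form's postback target back at the public URL, so the async post from the UpdatePanel hits a path the rewriter recognizes. A sketch ('form1' stands in for your <form runat="server"> ID; the settable Action property needs .NET 3.5 SP1 - earlier versions typically use a form control adapter instead):

        protected void Page_Load(object sender, EventArgs e)
        {
            // Post back to the rewritten (public) URL instead of the
            // internal one, so partial postbacks don't 404.
            form1.Action = Request.RawUrl;
        }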

    Read the article
