Search Results

Search found 343 results on 14 pages for 'redo'.

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of objects of different kinds). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory usage didn't seem to fit with how much we should need "after" we're done initializing), and I ended up with this report. What we can gather from this report is that:

    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like it returned to the OS).
    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
    3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] are attached directly to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.

    So my question is: what can I do to go further with this? I'm not really sure what to look for in debugging / tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET back to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes a second or two to run. Thanks for your help!

    A few notes I forgot:
    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool, convert it to .NET 4, and recompile.
    2) These are the results of a release build, so a debug build cannot be the issue.
    3) This is probably quite important: I do not get these issues at all in the webdev server, only in IIS. In webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!)
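
    As an aside on the "manually defragment the large heap" idea: .NET Framework 4.5.1 and later (newer than the runtimes discussed in this question) expose an opt-in compaction of the large object heap. A minimal sketch, assuming a RefreshCache() routine along the lines described above:

        // Sketch only; requires .NET Framework 4.5.1+ (GCSettings is in System.Runtime).
        using System;
        using System.Runtime;

        public static class CacheMaintenance
        {
            // Hypothetical helper called at the end of RefreshCache(): asks the GC to
            // compact the large object heap on the next blocking full collection.
            public static void CompactLargeObjectHeap()
            {
                GCSettings.LargeObjectHeapCompactionMode =
                    GCLargeObjectHeapCompactionMode.CompactOnce;
                GC.Collect(); // blocking full collection; a 1-2 second pause is acceptable here
            }
        }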

  • What does Apache need to support both mysqli and PDO?

    - by Nathan Long
    I'm considering changing some PHP code to use PDO for database access instead of mysqli (because the PDO syntax makes more sense to me and is database-agnostic). To do that, I'd need both methods to work while I'm making the changeover. My problem is this: so far, one or the other method will crash Apache. Right now I'm using XAMPP on Windows XP, with PHP version 5.2.8. Mysqli works fine, and so does this:

        $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password);
        echo 'Connected to database';
        $sql = "SELECT * FROM `employee`";

    But this line makes Apache crash:

        $dbc->query($sql);

    I don't want to redo my entire Apache or XAMPP installation, but I'd like PDO to work. So I tried updating libmysql.dll from here, as oddvibes recommended here. That made my simple PDO query work, but then mysqli queries crashed Apache. (I also tried the suggestion after that one, to update php_pdo_mysql.dll and php_pdo.dll, to no effect.)

    Test case: I created this test script to compare PDO vs. mysqli. With the old copy of libmysql.dll, it crashes if $use_pdo is true and doesn't if it's false. With the new copy of libmysql.dll, it's the opposite.

        if ($use_pdo){
            $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password);
            echo 'Connected to database<br />';
            $sql = "SELECT * FROM `employee`";
            $dbc->query($sql);
            foreach ($dbc->query($sql) as $row){
                echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n";
            }
        } else {
            $dbc = @mysqli_connect($hostname, $username, $password, $dbname) OR die('Could not connect to MySQL: ' . mysqli_connect_error());
            $sql = "SELECT * FROM `employee`";
            $result = @mysqli_query($dbc, $sql) or die(mysqli_error($dbc));
            while ($row = mysqli_fetch_array($result, MYSQLI_ASSOC)) {
                echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n";
            }
        }

    What does Apache need in order to support both methods of database query?

  • Can I mix declarative and programmatic layout in GWT 2.0?

    - by stuff22
    I'm trying to redo an existing panel that I made before GWT 2.0 was released. The panel has a few text fields and a scrollable panel below them in a VerticalPanel. What I'd like to do is build the scrollable panel with UiBinder and then add that to a VerticalPanel. Below is an example I created to illustrate this (note: the constructor originally read Test2(), which would not compile; it is renamed here to match the class):

        public class ScrollTablePanel extends ResizeComposite {
            interface Binder extends UiBinder<Widget, ScrollTablePanel> { }
            private static Binder uiBinder = GWT.create(Binder.class);

            @UiField FlexTable table1;
            @UiField FlexTable table2;

            public ScrollTablePanel() {
                initWidget(uiBinder.createAndBindUi(this));
                table1.setText(0, 0, "testing 1");
                table1.setText(0, 1, "testing 2");
                table1.setText(0, 2, "testing 3");
                table2.setText(0, 0, "testing 1");
                table2.setText(0, 1, "testing 2");
                table2.setText(0, 2, "testing 3");
                table2.setText(1, 0, "testing 4");
                table2.setText(1, 1, "testing 5");
                table2.setText(1, 2, "testing 6");
            }
        }

    then the XML:

        <ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder'
            xmlns:g='urn:import:com.google.gwt.user.client.ui'
            xmlns:mail='urn:import:com.test.scrollpaneltest'>
            <g:DockLayoutPanel unit='EM'>
                <g:north size="2">
                    <g:FlexTable ui:field="table1"></g:FlexTable>
                </g:north>
                <g:center>
                    <g:ScrollPanel>
                        <g:FlexTable ui:field="table2"></g:FlexTable>
                    </g:ScrollPanel>
                </g:center>
            </g:DockLayoutPanel>
        </ui:UiBinder>

    Then do something like this in the EntryPoint:

        public void onModuleLoad() {
            VerticalPanel vp = new VerticalPanel();
            vp.add(new ScrollTablePanel());
            vp.add(new Label("dummy label text"));
            vp.setWidth("100%");
            RootLayoutPanel.get().add(vp);
        }

    But when I add the ScrollTablePanel to the VerticalPanel, only the first FlexTable (table1) is visible on the page, not the whole ScrollTablePanel. Is there a way to make this work, where it is possible to mix declarative and programmatic layout in GWT 2.0?

  • How to refresh xmlDataProvider when xml document changes at runtime in WPF?

    - by Kajsa
    I am trying to make an image viewer/album creator in Visual Studio (WPF). The image paths for each album are stored in an XML document, which I bind to in order to show the images from each album in a ListBox. The problem arises when I add an image or an album at runtime and write it to the XML document: I can't seem to make the bindings to the XML document update so that they show the new images and albums as well. Calling Refresh() on the XmlDataProvider doesn't change anything. I don't wish to redo the binding of the XmlDataProvider, just make it read from the same source again.

    XAML:

        ...
        <Grid.DataContext>
            <XmlDataProvider x:Name="Images" Source="Data/images.xml" XPath="/albums/album[@name='no album']/image" />
        </Grid.DataContext>
        ...
        <Label Grid.Row="1" Grid.Column="0" HorizontalAlignment="Right" VerticalAlignment="Bottom" Padding="0" Margin="0,0,0,5" Content="{x:Static resx:Resource.AddImageLabel}"/>
        <TextBox Grid.Row="1" Grid.Column="1" HorizontalAlignment="Stretch" VerticalAlignment="Bottom" Name="newImagePath" Margin="0" />
        <Button Grid.Row="1" Grid.Column="2" HorizontalAlignment="Left" VerticalAlignment="Bottom" Name="newImagePathButton" Content="{x:Static resx:Resource.BrowseImageButton}" Click="newImagePathButton_Click" />
        ...
        <ListBox Grid.Column="0" Grid.ColumnSpan="4" Grid.Row="3" HorizontalAlignment="Stretch" Name="thumbnailList" VerticalAlignment="Bottom" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding BindingGroupName=Images}" SelectedIndex="0" Background="#FFE0E0E0" Height="110">
        ...

    Code-behind:

        private void newImagePathButton_Click(object sender, RoutedEventArgs e)
        {
            string imagePath = newImagePath.Text;
            albumCreator.addImage(imagePath, null);

            // Reset import image elements to default
            newImagePath.Text = "";

            // Refresh thumbnail listbox
            Images.Refresh();
            Console.WriteLine("Image added!");
        }

        public void addImage(string source, XmlElement parent)
        {
            if (parent == null)
            {
                // Use default album
                parent = (XmlElement)root.FirstChild;
            }

            // Create image element with source element within
            XmlElement newImage = xmlDoc.CreateElement(null, "image", null);
            XmlElement newSource = xmlDoc.CreateElement(null, "source", null);
            newSource.InnerText = source;
            newImage.AppendChild(newSource);

            // Add image element to parent
            parent.AppendChild(newImage);
            xmlDoc.Save(xmlFile);
        }

    Thank you very much for any help!
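
    One approach worth sketching (not from the original post): bind the provider to the in-memory XmlDocument rather than the file, since WPF tracks node changes on a document supplied through XmlDataProvider.Document. A minimal sketch, assuming the code-behind can reach both the provider named Images and the xmlDoc that addImage() mutates:

        using System.Windows.Data;
        using System.Xml;

        // Point the provider at the live DOM so runtime edits propagate
        // without re-reading Data/images.xml from disk.
        void BindProviderToLiveDocument(XmlDataProvider provider, XmlDocument doc)
        {
            provider.Source = null;   // detach the file-based source
            provider.Document = doc;  // the provider now observes changes on this document
        }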

  • Windows 7 .NET 3.5.1 - 2.0 Slightly Corrupted, How to Repair?

    - by Quinxy von Besiex
    My Windows 7 included .NET installation (3.5 down to 2.0) appears very slightly, and very specifically, corrupted, and I am trying to fix it without reinstalling Windows or reverting to backups.

    Everything was working, and then my hard drive started corrupting a few files and chkdsk found bad clusters, so I imaged the drive to a new one. As soon as I booted on the new drive, everything worked except programs that call the System.Net.NetworkInformation methods within .NET 3.5 to 2.0 (like Ping() and IsNetworkAvailable()), which immediately crash the app making the calls (the same calls in .NET 4.0 work fine). Those methods are found inside System.dll, and I assume they call native methods which I believe are inside winnsi.dll or iphlpapi.dll or something else (I've not found this yet); I assume it calls native methods because the exception which causes the crash is a Fatal Execution Engine Error, which people mention is usually related to calling native methods and marshaling data between them.

    A huge clue about the culprit: when I launch the exact same crashing application through a code profiler (which executes the exe and captures stats on which methods took the longest), the app works fine, no crash at all! How could running it within the profiler work and running it outside not work? That seems the key to the mystery.

    I've used procmon to catch all the registry, filesystem, and network events from the crashing execution and the profiler-run successful execution and compared the two outputs, but didn't learn much (I see the moment at which the non-profiled app crashes, but up until then they behave the same and load the same modules). The only big difference seems to be that at the moment before the app crash the profiler-executed code creates 4-6 new threads and the directly executed code only creates 1-2.

    I have diffed the files/directories which seemed most relevant (the .NET stuff under Windows and Program Files) pre- and post-disk trouble and seen no changes where I didn't expect any (no obvious file corruption). I have diffed the software and system registry hives pre- and post-disk trouble and seen no changes which seemed relevant. I have created a new user account and cleaned up any environment variables in case the environment was related: no change. I ran "sfc /scannow" and it found no integrity problems. I tried "ngen update" to regenerate pre-compiled code in case I missed something that might be damaged, and nothing changed.

    I assume I need to repair my .NET installation, but because Windows 7 includes .NET 3.5 - 2.0 you can't just re-run a .NET installer to redo it. I do not have access to the Windows disks to try to re-install Windows over itself (the computer has a recovery partition but it is unusable); also the drive uses a whole-disk encryption solution and re-installing would be difficult. I absolutely do not want to start from scratch here and install a fresh Windows, reinstall dozens of software packages, and try to remember dozens of development-related customizations, etc.

    Given all that... does anyone have any helpful advice? I need .NET 3.5 - 2.0 working, as I am a developer and need to build and test against it. Thanks! Quinxy
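
    For reference, a minimal repro sketch of the kind of call being described (the post paraphrases the method names; NetworkInterface.GetIsNetworkAvailable() and Ping.Send() are the corresponding framework APIs in System.Net.NetworkInformation):

        using System;
        using System.Net.NetworkInformation;

        class Repro
        {
            static void Main()
            {
                // On the affected machine, calls like these reportedly bring the
                // process down with a Fatal Execution Engine Error under .NET 2.0-3.5.
                Console.WriteLine(NetworkInterface.GetIsNetworkAvailable());
                using (var ping = new Ping())
                {
                    Console.WriteLine(ping.Send("127.0.0.1").Status);
                }
            }
        }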

  • SBS2003 to SBS2011 Migration - Installation Error

    - by Shawn Gradwell
    Microsoft Small Business Server 2003 to 2011 migration. I followed the Migration Guide from Microsoft, and the source server had no errors when running the various tests prior to the migration. I have completed the destination server setup using the Answer File and the server is up and running. It all looks good; I can access Exchange and AD, and the only problem is the error message when you log in, stating that the setup did not complete and to check the logs. Because all looks good, I am continuing the migration to the destination server. I should also state that this client does not use SharePoint at all. Do I have to redo everything? Herewith the logs:

        [4992] 121016.225454.5905: Task: Starting Add User or Group access VSS registry.

        [4992] 121016.225454.7645: TaskManagement: In TaskScheduler.RunTasks(): The "ConfigureSharePointVSSRegistryTask" Task threw an Exception during the Run() call: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
          at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
          at System.Security.Principal.NTAccount.Translate(Type targetType)
          at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
          at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
          at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
          at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
          at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
          at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)

        [4992] 121016.225454.7655: Setup: An error was encountered on the TME thread: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
          at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
          at System.Security.Principal.NTAccount.Translate(Type targetType)
          at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
          at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
          at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
          at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
          at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
          at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)
          at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e)

        [4956] 121016.225455.0685: Setup: _UnhandledExceptionHandler: Setup encountered an error: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: The TME thread failed (see the inner exception). ---> System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
          at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
          at System.Security.Principal.NTAccount.Translate(Type targetType)
          at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
          at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
          at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
          at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
          at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
          at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)
          at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e)
          at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
          --- End of inner exception stack trace ---
          at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter.TasksCompleted(Object sender, RunWorkerCompletedEventArgs e)
          --- End of inner exception stack trace ---
          at System.RuntimeMethodHandle._InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner)
          at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
          at System.Delegate.DynamicInvokeImpl(Object[] args)
          at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
          at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
          at System.Threading.ExecutionContext.runTryCode(Object userData)
          at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
          at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
          at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
          at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
          at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
          at System.Windows.Forms.Control.WndProc(Message& m)
          at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
          at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
          at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
          at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
          at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
          at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch()
          at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass._LaunchWizard()
          at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.RealMain(String[] args)
          at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.Main(String[] args)

        [4956] 121016.225455.0865: Setup: Removed the password.
        [4956] 121016.225455.0905: Setup: Deleting scheduled task at path Microsoft\Windows\Windows Small Business Server 2011 Standard with name Setup
        [4956] 121016.225455.8055: Setup: Removed SBSSetup from the RunOnce.
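
    For context on the exception itself, an illustration (hypothetical account name, not taken from these logs): the failing task is translating an account name to a SID, and System.Security.Principal.NTAccount.Translate throws IdentityNotMappedException when Windows cannot resolve the name, for example a SharePoint service account missing from the migrated AD:

        using System;
        using System.Security.Principal;

        class TranslateDemo
        {
            static void Main()
            {
                // Hypothetical account; if AD cannot resolve it, Translate throws
                // IdentityNotMappedException, which is what the setup log shows.
                var account = new NTAccount("CONTOSO", "spfarm");
                try
                {
                    var sid = (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
                    Console.WriteLine(sid.Value);
                }
                catch (IdentityNotMappedException ex)
                {
                    Console.WriteLine("Unresolvable account: " + ex.Message);
                }
            }
        }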

  • e2fsck / resize2fs problems

    - by BlakBat
    I've got 6 drives (each 1.5 TB, all the same model and firmware revision) that are part of a RAID5 array. The RAID5 array makes up an LVM volume group and a logical volume; the latter contains only one ext3 partition. I recently ran:

        e2fsck -f /dev/vg03/lv01 && resize2fs -M /dev/vg03/lv01

    which exited without an error. Now when I try to mount /dev/vg03/lv01 I get:

        EXT3-fs error (device dm-0): ext3_check_descriptors: Block bitmap for group 30533 not in group (block 1000532368)!
        EXT3-fs: group descriptors corrupted!

    How do I get out of this predicament? This is all the info I can currently give you: fdisk -l /dev/sd[cdefgh] shows (correctly) that the drives are "Linux raid autodetect", but fdisk now shows:

        fdisk -l /dev/md0
        Disk /dev/md0: 7501.5 GB, 7501495664640 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/md0 doesn't contain a valid partition table

    (instead of an LVM type partition)

        fdisk -l /dev/vg03/lv01
        Disk /dev/vg03/lv01: 7501.5 GB, 7501491732480 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/vg03/lv01 doesn't contain a valid partition table

    (instead of an ext3 type partition)

    I've tried:

        e2fsck -fy /dev/vg03/lv01
        e2fsck 1.41.12 (17-May-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        Block bitmap for group 30533 is not in group. (block 1000532368)
        Relocate? yes
        Inode bitmap for group 30533 is not in group. (block 1000532369)
        Relocate? yes
        Pass 1: Checking inodes, blocks, and sizes
        Relocating group 30533's block bitmap to 1000524246...
        Error allocating 1 contiguous block(s) in block group 30533 for inode bitmap: Could not allocate block in ext2 filesystem
        e2fsck: aborted

    Extra information I can give you:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active (auto-read-only) raid5 sdg1[0] sdh1[5] sdf1[4] sde1[3] sdc1[2] sdd1[1]
              7325679360 blocks level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
              bitmap: 1/175 pages [4KB], 4096KB chunk
        unused devices:

    Lastly, all smartctl tests (short and extended) showed no errors on any of the disks. Should I try resize2fs to grow /dev/vg03/lv01 and redo the e2fsck? Should I cfdisk /dev/md0 and /dev/vg03/lv01 back to their real types? Thanks in advance for any and all help.

    2011-09-20 UPDATE

    I issued the following commands and was able to remount the partition, but by comparing the size (df) before and after, it seems that 1 TB of data has gone missing. By checking the MD5SUMs (from an old backup) of some files against the "same" files from the remounted partition, some errors have been detected. The commands issued to remount the partition were:

        dumpe2fs /dev/vg03/lv01
        Block count: 1000491435
        Block size: 4096

        tune2fs -O ^has_journal /dev/vg03/lv01
        resize2fs -p /dev/vg03/lv01

        dumpe2fs /dev/vg03/lv01
        Block count: 1831418880
        Block size: 4096

        mount -o ro,noatime /dev/vg03/lv01 /mnt/raid

    OK... but files have been damaged / gone missing.

  • CodePlex Daily Summary for Monday, February 22, 2010

    New Projects

        AVDB: System to keep track of orders and the inventory of televisions, DVDs, VCRs etc.
        Booky: Booky is an online Bookmark Management Tool.
        Gear Up for Lord of the Rings Online (lotro): Windows utility for checking what your LOTRO character currently has equipped and figuring out gear you should get to improve your stats.
        GotSharp Extensions: GotSharp Extensions is a set of helpful classes and extension methods that can make your coding experience easier and cleaner.
        Halfwit: A minimalist WPF Twitter client.
        HOA Starter Kit: A community subdivision website starter kit. First draft.
        Lua For Irony: Project to define the Lua language using the Irony (http://irony.codeplex.com/) development kit. This work is based heavily on the work done for V...
        MimeCloud: Scalable .NET Digital Asset & Media Management: MimeCloud is a scalable digital asset library & media management toolset. Founded by Alex Norcliffe and Peter Miller. Written by people who have b...
        Parallel Mandelbrot Set solver: Solving the Mandelbrot set using the Parallel class in .NET 4.0, showing the resulting image in a WPF application. The solution file requires VS 2010.
        Pomogad - Pomodoro Windows Gadget: Do you use the Pomodoro Technique? Don't know what it is? See http://www.pomodorotechnique.com Now that you know, how about using this technique? ...
        PostCrap - flyweight .NET AOP post compiler: PostCrap is a flyweight attribute-based aspect injection .NET post compiler. It is written in C# and uses Mono.Cecil to modify assemblies and injec...
        Software + Service Reference Demo Kit: MS China Developer and Platform Evangelism team created an End-2-End demo for Software + Service.
        Yet Another SharePoint Tool: YEAST provides you with a simple to integrate approach to generating SharePoint solution packages as part of a Visual Studio project.
        Zen Coding Visual Studio Plugin: Zen Coding for Visual Studio is a plugin for HTML and CSS hi-speed coding.

    New Releases

        .Net MSBuild Google Closure Compiler Task: .Net MSBuild Google Closure Compiler Task 1.1: Corrected issue with regular expression source file and renaming.
        dotNails: dotNails_0.5.9: NOTE - the latest source code has been moved to Google Code to take advantage of Mercurial source control - http://code.google.com/p/dotnails/sourc...
        EasyWFUnit: EasyWFUnit-2.2: Release 2.2 of EasyWFUnit, an extension library to support unit testing of Windows Workflow, includes a revised WinForm GUI Test Builder that utili...
        Fluent Ribbon Control Suite: Fluent Ribbon Control Suite BETA2 (for .NET 4.0RC): Includes Fluent.dll (with .pdb and .xml) and test application compiled with .NET 4.0 RC.
        FolderSize: FolderSize.Win32.1.0.3.0: A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...
        Fusion Charts Free for SharePoint: 1.3: Fix release for issue #11833: Feature Must Be Activated on Root of Web Application.
        GotSharp Extensions: 1.0: First release, containing only a few extension methods for the System.String and System.IO.Stream classes, and a Range utility class.
        Jeremy's Experimental Repository: FluentValidation with IoC Sample: Sample code for the blog post Using FluentValidation with an IoC container.
        MiniTwitter: 1.08: MiniTwitter 1.08 changes. Fixed: auto-update, which had stopped working after changes at CodePlex; a crash when auto-update fails; a bug where the menu shown by right-clicking the notification area icon would not close. Changed: hashtag extraction rules; API endpo...
        MSTS Editors & Tools: Simis Editor v0.3: Enabled Edit > Undo and Edit > Redo. Undoing/redoing back to last saved state is identified as saved (no prompt on exit, etc.)....
        Parallel Mandelbrot Set solver: Alpha 1: First release.
        ParallelTasks: ParallelTasks 2.0 beta1: ParallelTasks 2.0 is a total re-write of the original version, featuring improved performance and stability and a more consistent API.
        Personal Expense Tracker: Personal Expense Tracker v0.1 beta: This is the first beta release. Please provide me with your feedback.
        PostCrap - flyweight .NET AOP post compiler: PostCrap 1.0 AOP source and binaries: PostCrap 1.0 source and binaries (the unit test project contains sample interceptor attributes for exception handling & logging).
        Protoforma | Tactica Adversa: Skilful 0.1.3.276: Alpha.
        Rawr: Rawr 2.3.10: More improvements to the default filters; further improvement on avoiding useless gem swaps from the Optimizer; Normal/Heroic ICC items shou...
        Reusable Library: v1.0.2: A collection of reusable abstractions for the enterprise application developer.
        Sem.Sync: 2010-02-21 - Synchronization Manager - Beta: This release is not tested very well, so you should use this version only to evaluate new features. Changed way of handling source-ids in order ...
        Survey - web survey & form engine: Survey 1.1.0: Release Survey v. 1.1.0.0. Major changes: layout & graphics completely overhauled; several technical changes & repairs (e.g. matrix question iss...
        Yet Another SharePoint Tool: Version 1: Version 1.
        Zeta Resource Editor: Release 2010-02-21: New source code release.

    Most Popular Projects

        WBFS Manager
        Rawr
        AJAX Control Toolkit
        Microsoft SQL Server Product Samples: Database
        Silverlight Toolkit
        Windows Presentation Foundation (WPF)
        Image Resizer Powertoy Clone for Windows
        ASP.NET
        DotNetNuke® Community Edition
        Microsoft SQL Server Community & Samples

    Most Active Projects

        DinnerNow.net
        Rawr
        BlogEngine.NET
        NB_Store - Free DotNetNuke Ecommerce Catalog Module
        Sharpy
        jQuery Library for SharePoint Web Services
        SharePoint Contrib
        InfoService
        patterns & practices – Enterprise Library
        PHPExcel

  • 24+ Coda Alternatives for Windows and Linux

    - by Matt
    Coda plays an important role in designing layouts on the Mac. There are numerous Coda alternatives for Windows and Linux too. It is not possible to describe every one of them, so some of the Coda alternatives that work on both Windows and Linux platforms are discussed below.

    EditPlus ($35.00)
    A good thing about EditPlus is that it highlights URLs and email addresses, activating them when you Ctrl + double-click. It also has a built-in browser for previewing HTML, and FTP and SFTP support. It also supports macros and RegEx find and replace.

    UltraEdit ($49.99)
    Another good Coda alternative for Windows and Linux. It is an editor best suited for text, HTML and hex, and also serves as an advanced PHP, Perl, Java and JavaScript editor for programmers. It supports disk-based 64-bit or standard file handling on 32-bit Windows platforms (Windows 2000 and later).

    HippoEdit ($39.95)
    HippoEdit has the best autocomplete: it pops up a "tooltip" above your cursor as you type, suggesting words you've already typed. It does syntax highlighting for over two dozen languages.

    Sublime Text ($59.00)
    Sublime Text's awesome "zoomed out" view of the file lets you focus on the area you want. It lets you open a local file when you right-click on its link, and there are a few automation features, so it makes a solid choice of text editor.

    TextPad ($24.70)
    TextPad is a simple editor with nifty features such as column select, drag-and-drop text between files, and hyperlink support. It also supports large files.

    Aptana (Free)
    Aptana Studio is one of the best editors working on both Windows and Linux. It is a complete web development environment with a nice blend of powerful authoring tools and a collection of online hosting and collaboration services. It is quite helpful, with support for PHP, CSS, FTP, and more.

    SciTE (Free)
    A SCIntilla-based text editor. It has gradually developed into a generally useful editor, provides for building and running programs, and is best used for jobs with simple configurations. SciTE is currently available for Intel Win32 and Linux-compatible operating systems with GTK+. It has been run on Windows XP and on Fedora 8 and Ubuntu 7.10 with GTK+ 2.12.

    E Text Editor ($34.96)
    E Text Editor is a new text editor for Windows, which also works on Linux. It has powerful editing features and some unique abilities. It makes text manipulation quite fast and easy, and lets the user focus on the writing, since it automatically does all the manual work. It can be extended in any language, and it supports TextMate bundles, letting the user tap into a huge and active community.

    Editra (Free)
    Editra is an up-and-coming editor, with some fantastic features such as user profiles, auto-completion, session saving, and syntax highlighting for 60+ languages. Plugins can extend the feature set, offering an integrated Python console, FTP client, file browser, and calculator, among others.

    PSPad (Free)
    PSPad is a good choice for writing CSS, as it brings an internal web browser and a macro recorder to the table. It also supports hex editing and some degree of code compiling.

    JEdit (Free)
    A mature programmer's text editor that has taken a good deal of time to develop into what it is today. It is better than many costlier development tools thanks to its features and simplicity of use. It has been released as free software with full source code, provided under the terms of the GPL 2.0, which adds to its attractiveness.

    NEdit (Free)
    A multi-purpose text editor for the X Window System, which also works on Linux. It combines a standard, easy-to-use graphical user interface with the full functionality and stability required by users who edit text for long periods a day. It provides thorough support for development in various languages, facilitates the use of text processors and other tools at the same time, and can be used productively by anyone who needs to edit text. It is quite a user-friendly tool. Its salient features include syntax highlighting with built-in patterns, auto-indent, tab emulation, block indentation adjustment, etc. As of version 5.1, NEdit may be freely distributed under the terms of the GNU General Public License.

    MadEdit (Free)
    MadEdit is an open-source and cross-platform text/hex editor, written in C++ and wxWidgets. MadEdit can edit files in text, column and hex modes, and supports many useful functions such as syntax highlighting, word wrap, and encodings for UTF-8/16/32, among others. It also supports word count, which makes it quite a useful text editor for both Windows and Linux. It was last modified on 10/09/2010.

    KompoZer (Free)
    KompoZer is a complete web authoring system that combines web file management with easy-to-use WYSIWYG web page editing. KompoZer has been designed to be completely and extensively easy to use, and is thus an ideal tool for non-technical computer users who want to create an attractive, professional-looking web site without knowing HTML or web coding. It is based on the NVU source code.

    Vim (Free)
    Vim, or "Vi IMproved", is an advanced text editor. Its salient features are syntax highlighting, word completion, and a huge amount of contributed content. Vim offers several "modes" for editing, which adds to editing efficiency; this makes it less friendly to new users, but it is also a strength for experienced ones. The normal mode binds alphanumeric keys to task-oriented commands, the visual mode highlights text, and more tools for search & replace, defining functions, etc. are offered through command-line mode. Vim comes with complete help.

    Notepad++ (Free)
    One of the best free text editors for Windows out there; with support for simple things, like syntax highlighting and folding, all the way up to FTP, Notepad++ should tick most of the boxes.

    Notepad2 (Free)
    Notepad2 is also based on the Scintilla editing engine, but it's much simpler than Notepad++. It bills itself as being fast, light-weight, and Notepad-like.

    Crimson Editor (Free)
    Crimson Editor has the ability to edit remote files using a built-in FTP client; there's also a spell checker.

    TotalEdit (Free)
    TotalEdit allows file comparison, RegEx search and replace, and has multiple options for file backup/versioning. For cleanup, it offers customizable (X)HTML and XML formatting, and a spell checker.

    In-Type (Free)

    ConTEXT (Free)

    SourceEdit (Free)
    SourceEdit includes features such as clipboard history, and syntax highlighting and autocompletion for a decent set of languages, plus a hex editor and an FTP client.

    RJ TextED (Free)
    RJ TextED supports integration with TopStyle Lite and provides HTML validation and formatting. It includes an FTP client, a file browser, and a code browser, as well as a character map and support for email.

    gedit (Free)
    It is one of the best Coda alternatives for Windows and Linux. It has syntax highlighting and is best suited for programming. It has many attractive features, such as full support for UTF-8, undo/redo, clipboard support, search and replace, configurable syntax highlighting for various languages, and many more supportive features. It is extensible with plugins.

    Other important Coda alternatives for Windows and Linux include Redcar, Bluefish Editor, NVU, RubyMine, SlickEdit, Geany, Editra, txt2html and CSSED. There are many more; it is up to the user to decide which one suits his requirements best.

    Related posts:
        10 Useful Text Editor For Developer
        Applications to Install & Run Windows on Linux
        Open Source WYSIWYG Text Editors

  • GoldenGate 12c Trail Encryption and Credentials with Oracle Wallet

    - by hamsun
    I have been asked more than once whether the Oracle Wallet supports GoldenGate trail encryption. Although GoldenGate has supported encryption with the ENCKEYS file for years, Oracle GoldenGate 12c now also supports encryption using the Oracle Wallet. This helps improve security and makes it easier to administer.

    Two types of wallets can be configured in Oracle GoldenGate 12c:

        The wallet that holds the master keys, used with trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.
        The wallet that holds the Oracle Database user IDs and passwords, stored in the "credential store" in the new 12c dircrd/cwallet.sso file.

    A wallet can be created using a "create wallet" command. Adding a master key to an existing wallet is easy using the "open wallet" and "add masterkey" commands.

        GGSCI (EDLVC3R27P0) 42> open wallet
        Opened wallet at location 'dirwlt'.
        GGSCI (EDLVC3R27P0) 43> add masterkey
        Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.

    Existing GUI wallet utilities that come with other products, such as the Oracle Database "Oracle Wallet Manager", do not work on this version of the wallet. The default Oracle Wallet location can be changed.

        GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/*
        -rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso
        GGSCI (EDLVC3R27P0) 45> info masterkey
        Masterkey Name: OGG_DEFAULT_MASTERKEY
        Creation Date:  Fri May 30 05:24:04 2014
        Version   Creation Date               Status
        1         Fri May 30 05:24:04 2014    Current

    The second wallet file is used for the credential used to connect to a database, without exposing the user ID or password. Once it is configured, this file can be copied so that credentials are available to connect to the source or target database.

        GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd
        GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/*
        -rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso

    The encryption wallet file can also be copied to the target machine so the Replicat has access to the master key to decrypt records that are encrypted in the trail. Similar to the old ENCKEYS file, the master keys wallet created on the source host must either be stored on centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, and NonStop platforms.

        GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt

    The new 12c UserIdAlias parameter is used to locate the credential in the wallet, so the source user ID and password do not need to be stored as a parameter, as long as they are in the wallet.

        GGSCI (EDLVC3R27P0) 52> view param extwest
        extract extwest
        exttrail ./dirdat/ew
        useridalias gguamer
        table west.*;

    The EncryptTrail parameter is used to encrypt the trail using the Advanced Encryption Standard and can be used with a primary extract or pump extract.

        GGSCI (EDLVC3R27P0) 54> view param pwest
        extract pwest
        encrypttrail AES256
        rmthost easthost, mgrport 15001
        rmttrail ./dirdat/pe
        passthru
        table west.*;

    Once the extracts are running, records can be encrypted using the wallet.

        GGSCI (EDLVC3R27P0) 60> info extract *west
        EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING
        Checkpoint Lag       00:00:17 (updated 00:00:01 ago)
        Process ID           24982
        Log Read Checkpoint  Oracle Integrated Redo Logs
                             2014-05-30 05:25:53
                             SCN 0.0 (0)
        EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING
        Checkpoint Lag       24:02:32 (updated 00:00:05 ago)
        Process ID           24983
        Log Read Checkpoint  File ./dirdat/ew000004
                             2014-05-29 05:23:34.748949  RBA 1483

    The "info masterkey" command is used to confirm the wallet contains the key after copying it to the target machine. The key is needed to decrypt the data in the trail before the Replicat applies the changes to the target database.

        GGSCI (EDLVC3R27P0) 41> open wallet
        Opened wallet at location 'dirwlt'.
        GGSCI (EDLVC3R27P0) 42> info masterkey
        Masterkey Name: OGG_DEFAULT_MASTERKEY
        Creation Date:  Fri May 30 05:24:04 2014
        Version   Creation Date               Status
        1         Fri May 30 05:24:04 2014    Current

    Once the Replicat is running, records can be decrypted using the wallet.

        GGSCI (EDLVC3R27P0) 44> info reast
        REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING
        INTEGRATED
        Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
        Process ID           25057
        Log Read Checkpoint  File ./dirdat/pe000004
                             2014-05-30 05:28:16.000000  RBA 1546

    There is no need for the DecryptTrail parameter when using the Oracle Wallet, unlike when using the ENCKEYS file.

        GGSCI (EDLVC3R27P0) 45> view params reast
        replicat reast
        assumetargetdefs
        discardfile ./dirrpt/reast.dsc, purge
        useridalias ggueuro
        map west.*, target east.*;

    Once a record is inserted into the source table and committed, the encryption can be verified using logdump and then querying the target table.

        AMER_SQL> insert into west.branch values (50, 80071);
        1 row created.
        AMER_SQL> commit;
        Commit complete.

    The following encrypted record can be found using logdump:

        Logdump 40 >n
        2014/05/30 05:28:30.001.154 Insert    Len 28    RBA 1546
        Name: WEST.BRANCH
        After Image:    Partition 4    G  s
          0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1
          554f e65a 5185 0257                               | UO.ZQ..W
        Bad compressed block, found length of 7075 (x1ba3), RBA 1546
        GGS tokens:
        TokenID x52 'R' ORAROWID  Info x00  Length 20
          4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp..
        TokenID x4c 'L' LOGCSN    Info x00  Length 7
          3231 3632 3934 33                                 | 2162943
        TokenID x36 '6' TRANID    Info x00  Length 10
          3130 2e31 372e 3135 3031                          | 10.17.1501

    The Replicat automatically decrypted this record from the trail and then inserted the row into the target table using the wallet. This select verifies the row was inserted into the target database and that the data is not encrypted.

        EURO_SQL> select * from branch where branch_number=50;
        BRANCH_NUMBER   BRANCH_ZIP
        -------------   ----------
        50              80071

    Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle course now to learn more about GoldenGate 12c new features, including how to use GoldenGate with the Oracle Wallet, credentials, integrated extracts, integrated replicats, the Oracle Universal Installer, and other new features. Looking for another course? View all Oracle GoldenGate training.

    Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and a GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010 and also has experience teaching other technical curriculums, including GoldenGate Monitor, Veridata, JD Edwards, PeopleSoft, and the Oracle Application Server.

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary: the scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

    What's New in MySQL Cluster 7.2

    MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL key/value API, cross-data-center replication and ease-of-use. These enhancements are summarized in the figure below and detailed in the MySQL Cluster New Features whitepaper.

    Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use

    One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.

    Multi-Threaded Data Node Extensions

    The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:

    1) Local Data Manager threads (ldm). Note: these are sometimes also called LQH threads.
    2) Transaction Coordinator threads (tc)
    3) Asynchronous Replication threads (rep)
    4) Schema Management threads (main)
    5) Network receiver threads (recv)
    6) Network send threads (send)
    7) IO threads

    Each of these thread types is discussed in more detail below.

    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable; it is possible, however, to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.

    The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality, introduced in MySQL Cluster 7.2, which significantly enhances complex query performance by pushing JOIN operations down to the data nodes.

    Asynchronous replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. The schema management thread has little load, so it is implemented as a single thread.

    The network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it was also necessary to increase the number of recv threads, to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously, other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads, which is useful for those workloads that are most sensitive to latency. The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured either as one thread per open file, or as a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.
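
    To make the thread layout concrete, an illustrative config.ini fragment for a multi-threaded data node (not from the original article; the counts are hypothetical, and the exact ThreadConfig grammar and supported thread types should be checked against the documentation for the release in use):

        [ndbd default]
        # Simple route: let the data node distribute threads automatically.
        MaxNoOfExecutionThreads=24
        # Finer-grained alternative (illustrative counts only; verify the
        # ThreadConfig syntax supported by your MySQL Cluster release):
        # ThreadConfig=ldm={count=8},tc={count=4},send={count=2},recv={count=2},main={count=1},rep={count=1}
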
    Benchmarking the Scalability Enhancements

    The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.

    Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1

    The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual-socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper, coming soon, so stay tuned!

    In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

    Conclusion

    MySQL Cluster has achieved a range of impressive benchmark results and, set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record: 18,000 concurrent users experiencing subsecond response time while executing a PeopleSoft Payroll batch job of 500,000 employees in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier.

    The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 47% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

    Performance Landscape

    Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark (response times are averages in seconds for the search and save operations). The new result with 128 streams shows significant improvement in the payroll batch processing time, with little impact on the self-service component response time.

    PeopleSoft HRMS Self-Service and Payroll Benchmark

        Systems                                              | Users  | Avg Search (sec) | Avg Save (sec) | Batch Time (min) | Streams
        SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)  | 18,000 | 0.988            | 0.539          | 32.4             | 128
        SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)  | 18,000 | 0.944            | 0.503          | 43.3             | 64

    The following results are for the PeopleSoft HRMS Self-Service benchmark that was previously run. The results are not directly comparable with the combined results because they do not include the payroll component.

    PeopleSoft HRMS Self-Service 9.1 Benchmark

        Systems                                                 | Users  | Avg Search (sec) | Avg Save (sec) | Batch Time (min) | Streams
        SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db)  | 18,000 | 1.048            | 0.742          | N/A              | N/A

    The following results are for the PeopleSoft Payroll benchmark that was previously run. The results are not directly comparable with the combined results because they do not include the self-service component.

    PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)

        Systems          | Users | Avg Search (sec) | Avg Save (sec) | Batch Time (min) | Streams
        SPARC T4-4 (db)  | N/A   | N/A              | N/A            | 30.84            | 96

    Configuration Summary

    Application Configuration:
        1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
        512 GB memory
        Oracle Solaris 11 11/11
        PeopleTools 8.52
        PeopleSoft HCM 9.1
        Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
        Java Platform, Standard Edition Development Kit 6 Update 32

    Database Configuration:
        1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
        256 GB memory
        Oracle Solaris 11 11/11
        Oracle Database 11g Release 2
        PeopleTools 8.52
        Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
        Micro Focus Server Express (COBOL v 5.1.00)

    Web Tier Configuration:
        1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz
        256 GB memory
        Oracle Solaris 11 11/11
        PeopleTools 8.52
        Oracle WebLogic Server 10.3.4
        Java Platform, Standard Edition Development Kit 6 Update 32

    Storage Configuration:
        1 x Sun Server X2-4 as a COMSTAR head for data
            4 x Intel Xeon X7550, 2.0 GHz
            128 GB memory
        1 x Sun Storage F5100 Flash Array (80 flash modules)
        1 x Sun Storage F5100 Flash Array (40 flash modules)
        1 x Sun Fire X4275 as a COMSTAR head for redo logs
            12 x 2 TB SAS disks with Niwot Raid controller

    Benchmark Description

    This benchmark combines PeopleSoft HCM 9.1 HR Self Service online and PeopleSoft Payroll batch workloads to run on a unified database deployed on Oracle Database 11g Release 2.

    The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark where DB SQLs are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, updating an employee profile and other typical HCM application transactions. All these transactions are well-defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. This benchmark's metric is the weighted average response search/save time for all the transactions.

    The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of an ERP environment during a mass update. The benchmark measures five application business process run times for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes.

    The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices

    Two PeopleSoft domain sets with 200 application servers each on a SPARC T4-4 server were hosted in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (total 120 threads). The default set (1 core from the first and third processor sockets, total 16 threads) was used for network and disk interrupt handling. This was done to improve performance: it reduces memory access latency by using the physical memory closest to the processors, and offloads I/O interrupt handling to default-set threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads. A total of 128 PeopleSoft stream server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes.

    See Also

        Oracle PeopleSoft Benchmark White Papers - oracle.com
        SPARC T4-2 Server - oracle.com OTN
        SPARC T4-4 Server - oracle.com OTN
        PeopleSoft Enterprise Human Capital Management - oracle.com OTN
        PeopleSoft Enterprise Human Capital Management (Payroll) - oracle.com OTN
        Oracle Solaris - oracle.com OTN
        Oracle Database 11g Release 2 - oracle.com OTN

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting. I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable defined application-interface with the database (Yes, one for each application usually) you need to keep the external definitions of the components of this interface in version control, linked with the application source,  and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the Database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.  
If the application interface passes all the comprehensive integration and functional tests for the particular version it was designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully rarely need to be altered. If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions, and the database developers need only implement the interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests which are, of course, ‘written to’ the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests. Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding an official walking with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.
There is no natural process to ‘version’ databases. Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good. Work means progress. Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.
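To make the idea concrete, here is a minimal T-SQL sketch of such an application-interface; the schema, table, procedure and role names are all hypothetical, and a real interface would of course be negotiated between the two teams:

-- The application-interface lives in its own schema.
CREATE SCHEMA app;
GO
-- Base table: private to the database developers.
CREATE TABLE dbo.Customer (
    CustomerID int IDENTITY PRIMARY KEY,
    FirstName  nvarchar(50)  NOT NULL,
    LastName   nvarchar(50)  NOT NULL,
    Email      nvarchar(255) NOT NULL
);
GO
-- A view for reads...
CREATE VIEW app.CustomerSummary
AS
SELECT CustomerID, FirstName, LastName, Email
FROM dbo.Customer;
GO
-- ...and a stored procedure for writes.
CREATE PROCEDURE app.AddCustomer
    @FirstName nvarchar(50),
    @LastName  nvarchar(50),
    @Email     nvarchar(255)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.Customer (FirstName, LastName, Email)
    VALUES (@FirstName, @LastName, @Email);
END
GO
-- The application's login sees only the interface, never the base tables.
CREATE ROLE ApplicationRole;
GRANT SELECT  ON app.CustomerSummary TO ApplicationRole;
GRANT EXECUTE ON app.AddCustomer     TO ApplicationRole;

The base table can now be refactored (split, renamed, re-indexed) without touching the application, as long as the view and procedure keep their external contracts.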

    Read the article

  • CodePlex Daily Summary for Sunday, August 03, 2014

CodePlex Daily Summary for Sunday, August 03, 2014

Popular Releases

BoxStarter: Boxstarter 2.4.76: Running the Setup.bat file will install Chocolatey if not present and then install the Boxstarter modules.
GMare: GMare Beta 1.2: Features Added: - Instance painting by holding the alt key down while pressing the left mouse button - Functionality to the binary exporter so that backgrounds from image files can be used - On the binary exporter background information can be edited manually now - Update to the GMare binary read GML script - Game Maker Studio export - Import from GMare project. Multiple options to import desired properties of a .gmpx - 10 undo/redo levels instead of 5 is now the default - New preferences dia...
Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON New feature - Added JValue.CreateNull and JValue.CreateUndefined New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly New feature - Added OverrideCreator to JsonObjectContract New feature - Added support for overriding the creation of interfaces and abstract types New feature - Added support for reading UUID BSON binary values as a Guid New feature - Added MetadataPropertyHandling.Ignore New feature - Improv...
SQL Server Dialog: SQL Server Dialog: Input server, user and password Show folder and file in treeview Customize icon Filter file extension Skip system generated folders and files
Aitso-a platform for spatial optimization and based on artificial immune systems: Aitso_0.14.08.01: Aitso0.14.08.01Installer.zip
VidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.
AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. Added support for new url field for 64 bit application setup. AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Now developer can handle update logic herself using event suggested by ricorx7. Added Italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...
SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.
Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6 Breaking changes: - Changed arguments for the Map method of MagickImage. - QuantizeSettings uses Riemersma by default.
Multiple Threads TCP Server: Project: this project is based on VS 2013, .NET Framework 4.0; you can open it with VS 2010 or later
Aricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 - Release Notes New: Expression Builder to design Flee Expressions New: Cryptographic helpers and configuration classes Improvement: Many fixes and improvements with property editor Improvement: Token Replace Property explorer now has a restricted mode for additional security Improvement: Better variables, types and object manipulation Fixed: smart file and flee bugs Fixed: Removed Exception while trying to read unsupported files Improvement: several performance twe...
Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module to download series. Uses the RSS feed: http://www.divxtotal.com/rss.php
DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .NET 4.0+. It has a clear and easy programming interface for ORM and SQL directly, and supports Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby On Rails style MVC framework, an Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...
Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1 This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New? Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...
Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers).
VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes NEW: Added Support for 'ImgMega.com' links NEW: Added Support for 'ImgCandy.net' links NEW: Added Support for 'ImgPit.com' links NEW: Added Support for 'Img.yt' links FIXED: 'Radikal.ru' links FIXED: 'ImageTeam.org' links FIXED: 'ImgSee.com' links FIXED: 'Img.yt' links
Asp.Net MVC-4,Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4,Entity Framework and JQGrid Demo: Asp.Net MVC-4, Entity Framework and JQGrid demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, which presents the following fields to the user (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and JqGrid for displaying the data.
Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0 Experimental support for WebCL New features in Firefox 31.0: New Add the search field to the new tab page Support of Prefer:Safe http header for parental control mozilla::pkix as default certificate verifier Block malware from downloaded files audio/video .ogg and .pdf files handled by Firefox if no application specified Changed Removal of the CAPS infrastructure for specifying site-sp...
SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collecting a server's status after it has been stopped fixed a bug that can cause an exception in case of sending data when the connection has already dropped fixed the log4net missing issue for a QuickStart project fixed a warning in a QuickStart project
Ynote Classic: Ynote Classic 2.8.5 Beta: Several Changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved Stability

New Projects

Creek: Creek is a Collection of many C# Frameworks and my own
Speaking Speedometer (android): Simple speaking speedometer
T125Protocol { Alpha version }: implement T125 Protocol for communicating with a mainframe.
Unix Time: This library provides a System.UnixTime as a new Type providing conversion between Unix Time and .NET DateTime.

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

MySQL before the InnoDB Plugin

Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

CREATE TABLE t(a INT);
INSERT INTO t VALUES (1),(2),(3);
CREATE INDEX a ON t(a);
DROP TABLE t;

The CREATE INDEX statement would be executed roughly as follows:

CREATE TABLE temp(a INT, INDEX(a));
INSERT INTO temp SELECT * FROM t;
RENAME TABLE t TO temp2;
RENAME TABLE temp TO t;
DROP TABLE temp2;

You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

Fast Index Creation in the InnoDB Plugin of MySQL 5.1

MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

The ALTER TABLE interface in MySQL 5.6

The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes.

In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED.

Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

handler::check_if_supported_inplace_alter: Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
handler::inplace_alter_table: In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table).

handler::commit_inplace_alter_table: This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation.

In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine.

Online Secondary Index Creation

It may occur that an index needs to be created on a new column to speed up queries. But it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need are some more execution phases:

1. Set up a stub for the index, for logging changes.
2. Scan the table for index records.
3. Sort the index records.
4. Bulk load the index records.
5. Apply the logged changes.
6. Replace the stub with the actual index.

Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

Native ALTER TABLE

Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes.

The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild.

Online operation (LOCK=NONE) is not allowed in the following cases:

- when adding an AUTO_INCREMENT column,
- when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or
- when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL options.

The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.

Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN.

The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write the changes to the log.
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
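To make the 5.6 syntax concrete, here is a small worked example using the statements described above; it is a sketch assuming a MySQL 5.6 server with InnoDB as the default engine:

CREATE TABLE t (a INT, b INT) ENGINE=InnoDB;
INSERT INTO t VALUES (1,1),(2,2),(3,3);

-- Build a secondary index online; concurrent DML stays possible.
ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE;

-- ADD COLUMN rebuilds the table inside InnoDB, but can still run online.
ALTER TABLE t ADD COLUMN c INT, ALGORITHM=INPLACE, LOCK=NONE;

-- Force the pre-5.6 table-copying behaviour for comparison.
ALTER TABLE t DROP INDEX idx_a, ALGORITHM=COPY, LOCK=SHARED;

If the server cannot satisfy the requested ALGORITHM or LOCK level (for instance, LOCK=NONE on one of the disallowed operations listed above), the statement fails immediately with an error rather than silently taking a stronger lock, which makes the clauses a useful way to assert your locking expectations in deployment scripts.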

    Read the article

  • Resolving data redundancy up front

    - by okeofs
Introduction

As all of us do when confronted with a problem, the resource of choice is to ‘Google it’. This is where the plot thickens. Recently I was asked to stage data from numerous databases which were to be loaded into a data warehouse. To make a long story short, I was looking for a manner in which to obtain the table names from each database, to ascertain potential overlap.

As the source data comes from a SQL database created from dumps of a third-party product, one could say that there were +/- 95 tables for each database. Yes, I know that the first instinct is to use the system stored procedure “exec sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'”. However, if one stops to think about this, it would be nice to have all the results in a temporary or disc-based table, which in itself implies additional labour. This said, I decided to ‘re-invent’ the wheel. The full code sample may be found at the bottom of this article.

Define a few temporary tables and variables

declare @SQL varchar(max);
declare @databasename varchar(75)
/*
drop table ##rawdata3
drop table #rawdata1
drop table #rawdata11
*/
-- A temp table to hold the names of my databases
CREATE TABLE #rawdata1
(
   database_name varchar(50),
   database_size varchar(50),
   remarks Varchar(50)
)

-- A temp table with the same database names as above, HOWEVER using an
-- identity number (recNo) as a loop variable.
-- You will note below that I loop through until I reach 25 (see below) as at
-- that point the system databases, the reporting server database etc. begin.
-- 1-24 are user databases. These are really what I was looking for.
-- Whilst NOT the best solution, it works and the code was meant as a quick
-- and dirty.
CREATE TABLE #rawdata11
(
   recNo int identity(1,1),
   database_name varchar(50),
   database_size varchar(50),
   remarks Varchar(50)
)

-- My output table showing the database name and table name
CREATE TABLE ##rawdata3
(
   database_name varchar(75),
   table_name varchar(75)
)

Insert the database names into a temporary table

I pull the database names using the system stored procedure sp_databases:

INSERT INTO #rawdata1
EXEC sp_databases
Go

Insert the results from #rawdata1 into a table containing a record number (#rawdata11) so that I can LOOP through the extract:

INSERT into #rawdata11
select * from #rawdata1

We now declare 3 more variables. @kounter is used to keep track of our position within the loop. @databasename is used to keep track of the ‘current’ database name being used in the current pass of the loop, as in order to obtain the tables for that database we need to issue a ‘USE’ statement, an insert command and other related code parts. This is the challenging part. @sql is a varchar(max) variable used to contain the ‘USE’ statement PLUS the insert code statements. We now initialize @kounter to 1.

declare @kounter int;
declare @databasename varchar(75);
declare @sql varchar(max);
set @kounter = 1

The Loop

The astute reader will remember that the temporary table #rawdata11 contains our database names and each ‘database row’ has a record number (recNo). I am only interested in record numbers under 25. I now set the value of the temporary variable @DatabaseName (see below). Note that I used the row number as a part of the predicate. Now, knowing the database name, I can create dynamic T-SQL to be executed using the sp_sqlexec stored procedure (see the code below).
Finally, after all the tables for that given database have been placed in the temporary table ##rawdata3, I increment the counter and continue on. Note that I used a global temporary table to ensure that the result set persists after the termination of the run. At some stage, I plan to redo this part of the code, as global temporary tables are not really an ideal solution.

WHILE (@kounter < 25)
BEGIN
 select @DatabaseName = database_name from #rawdata11 where recNo = @kounter
 set @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 ' +
            ' SELECT table_catalog, Table_name FROM information_schema.tables'
 exec sp_sqlexec @Sql
 SET @kounter = @kounter + 1
END

The full code extract

Here is the full code sample.

declare @SQL varchar(max);
declare @databasename varchar(75)
/*
drop table ##rawdata3
drop table #rawdata1
drop table #rawdata11
*/
CREATE TABLE #rawdata1
(
   database_name varchar(50),
   database_size varchar(50),
   remarks Varchar(50)
)
CREATE TABLE #rawdata11
(
   recNo int identity(1,1),
   database_name varchar(50),
   database_size varchar(50),
   remarks Varchar(50)
)
CREATE TABLE ##rawdata3
(
   database_name varchar(75),
   table_name varchar(75)
)

INSERT INTO #rawdata1
EXEC sp_databases
go

INSERT into #rawdata11
select * from #rawdata1

declare @kounter int;
declare @databasename varchar(75);
declare @sql varchar(max);
set @kounter = 1

WHILE (@kounter < 25)
BEGIN
 select @databasename = database_name from #rawdata11 where recNo = @kounter
 set @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 ' +
            ' SELECT table_catalog, Table_name FROM information_schema.tables'
 exec sp_sqlexec @Sql
 SET @kounter = @kounter + 1
END

select * from ##rawdata3
where table_name like '%SalesOrderHeader%'
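As a possible refinement (my sketch, not the author's code), the hard-coded record count can be avoided by looping over sys.databases and filtering the system databases by database_id; QUOTENAME also guards against database names that need bracketing:

-- Alternative sketch: cursor over sys.databases instead of a counter
DECLARE @db sysname, @stmt nvarchar(max);

DECLARE db_cur CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4          -- skip master, tempdb, model, msdb
      AND state_desc = 'ONLINE';

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @stmt = N'INSERT INTO ##rawdata3 (database_name, table_name)
                  SELECT table_catalog, table_name
                  FROM ' + QUOTENAME(@db) + N'.INFORMATION_SCHEMA.TABLES;';
    EXEC sp_executesql @stmt;
    FETCH NEXT FROM db_cur INTO @db;
END
CLOSE db_cur;
DEALLOCATE db_cur;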

    Read the article

  • Data Guard - Snapshot Standby Database??

    - by Jian Zhang-Oracle
Overview
--------
Normally a physical standby database runs in the mount state, applying the REDO shipped from the primary. Opening a physical standby read-only while redo apply continues requires the ACTIVE DATA GUARD option, and even then the standby can only be queried (read-only), not modified. Sometimes, however, you want to run read-write tests against a copy of production, for example a Real Application Testing (RAT) database replay, and for that a snapshot standby is the right tool. A snapshot standby is a physical standby that has been temporarily converted into a fully updatable (read-write) database. While it is a snapshot standby it continues to receive redo from the primary (without applying it), so data protection is not lost; when testing is finished, all changes made on the snapshot standby are discarded and the database is converted back to a physical standby, which catches up by applying the accumulated redo.

Steps
-----
1. Configure a fast recovery area on the standby (it holds the flashback data needed for the conversion back):

SQL> Alter system set db_recovery_file_dest_size=500M;
System altered.
SQL> Alter system set db_recovery_file_dest='/u01/app/oracle/snapshot_standby';
System altered.

2. Stop managed recovery on the standby:

SQL> alter database recover managed standby database cancel;
Database altered.

3. Convert the standby to a snapshot standby and open it:

SQL> alter database convert to snapshot standby;
Database altered.
SQL> alter database open;
Database altered.

The snapshot standby now reports the role SNAPSHOT STANDBY and is open READ WRITE:

SQL> select DATABASE_ROLE,name,OPEN_MODE from v$database;
DATABASE_ROLE    NAME      OPEN_MODE
---------------- --------- --------------------
SNAPSHOT STANDBY FSDB      READ WRITE

4. Run Real Application Testing (RAT) database replay, or any other read-write tests, against the snapshot standby.

5. When testing is complete, convert the snapshot standby back to a physical standby and restart redo apply:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
Database altered.
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Database altered.

6. Back in the standby role, the database reports PHYSICAL STANDBY with OPEN_MODE MOUNTED:

SQL> select DATABASE_ROLE,name,OPEN_MODE from v$database;
DATABASE_ROLE    NAME      OPEN_MODE
---------------- --------- --------------------
PHYSICAL STANDBY FSDB      MOUNTED

7. Verify that redo is being shipped and applied.

On the primary:

SQL> select ads.dest_id,max(sequence#) "Current Sequence",
            max(log_sequence) "Last Archived"
       from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
       where ad.dest_id=al.dest_id
       and al.dest_id=ads.dest_id
       and al.resetlogs_change#=(select max(resetlogs_change#) from v$archived_log )
       group by ads.dest_id;

   DEST_ID Current Sequence Last Archived
---------- ---------------- -------------
         1              361           361
         2              361           362

On the standby:

SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
      from (select thread# thrd, max(sequence#) almax
          from v$archived_log
          where resetlogs_change#=(select resetlogs_change# from v$database)
          group by thread#) al,
         (select thread# thrd, max(sequence#) lhmax
          from v$log_history
          where resetlogs_change#=(select resetlogs_change# from v$database)
          group by thread#) lh
     where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1               361              361
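As a quick verification step (my sketch, not part of the original procedure): converting to a snapshot standby implicitly creates a guaranteed restore point, which anchors the flashback data used to convert back, and it can be inspected from the standby while testing:

SQL> select name, scn, time, guarantee_flashback_database from v$restore_point;

SQL> select flashback_on from v$database;

If the fast recovery area configured in step 1 fills up while the snapshot standby is open read-write, redo reception and the eventual conversion back can be jeopardized, so it is worth sizing db_recovery_file_dest_size generously for long test runs.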

    Read the article

  • Mount TMPFS instead of ro /dev

    - by schiggn
I am working on an ARM-based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since there is no hot-plugging functionality needed and boot time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). Because I still need to change parameters of the devices (with ioctl/sysfs), this does not work for me in this case. So I thought of mounting a tmpfs on /dev and changing the parameters there. Is this possible? And what is the best way to do it? My approach would be:

delete /dev from the RFS
create a tar containing the basic devices
mount a tmpfs on /dev
untar the tar file into /dev
change parameters

Could this work? Do you see any problems? I found out that you can mount on top of an already mounted mount point; is it somehow possible to take the data with you while mounting the new file system? If so, that would be very convenient!

Thanks

Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, packed it into /usr of my squashfs and added the following lines to mountkernfs.sh, which is executed right after INIT.

#mount /dev on tmpfs
echo -n "Mounting /dev on tmpfs..."
mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev
mknod -m 600 /dev/console c 5 1
mknod -m 600 /dev/null c 1 3
echo "done."
echo -n "Populating /dev..."
tar -xf /usr/devices.tar -C /dev
echo "done."

This works fine on the version over NFS. If I place printf's in the code, I can see it executing, and if I comment out the extracting part, it complains about missing devices.

Booting OK:

mmc0: new high speed SDHC card at address 0007
mmcblk0: mmc0:0007 SD04G 3.67 GiB
 mmcblk0: p1
IP-Config: Unable to set interface netmask (-22).
Looking up port of RPC 100003/2 on 192.168.1.234
Looking up port of RPC 100005/1 on 192.168.1.234
VFS: Mounted root (nfs filesystem) on device 0:14.
Freeing init memory: 136K
INIT: version 2.86 booting
Mounting /dev on tmpfs...done.
Populating /dev...done.
Initializing /var...done.
Setting the system clock.
System Clock set to: Thu Sep 13 11:26:23 UTC 2012.
INIT: Entering runlevel: 2
UBI: attaching mtd8 to ubi0

Commenting out the extraction of the tar:

mmc0: new high speed SDHC card at address 0007
mmcblk0: mmc0:0007 SD04G 3.67 GiB
 mmcblk0: p1
IP-Config: Unable to set interface netmask (-22).
Looking up port of RPC 100003/2 on 192.168.1.234
Looking up port of RPC 100005/1 on 192.168.1.234
VFS: Mounted root (nfs filesystem) on device 0:14.
Freeing init memory: 136K
INIT: version 2.86 booting
Mounting /dev on tmpfs...done.
Populating /dev...done.
Initializing /var...done.
Setting the system clock.
Cannot access the Hardware Clock via any known method.
Use the --debug option to see the details of our search for an access method.
Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning).
INIT: Entering runlevel: 2
libubi: error!: cannot open "/dev/ubi_ctrl"

So far so good. But if I pack the whole story into a squashfs and boot from there, it is acting strange. It's telling me while booting that it is unable to open an initial console, and it's throwing errors on mounting the UBIFS devices, but finally provides a login anyway. On top of that, my echos are not executed. If I then log in, /dev is mounted as tmpfs as desired and all the devices reside inside. When I redo the "mount" command to mount the UBIFS partitions, it is executed without problem and is usable.
From squashfs:

VFS: Mounted root (squashfs filesystem) readonly on device 31:15.
Freeing init memory: 136K
Warning: unable to open an initial console.
mmc0: new high speed SDHC card at address 0007
mmcblk0: mmc0:0007 SD04G 3.67 GiB
 mmcblk0: p1
UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19

Additionally, a part of the rest of the boot scripts is still executed, but not all of them. Does anyone have a clue why? One other question: is 5 MB enough/too much for /dev?
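One plausible cause of the "Warning: unable to open an initial console" message (offered as a guess, since the image build is not shown): the kernel tries to open /dev/console on the root file system right after mounting it, before init and the boot scripts run. On the NFS root a console node exists, but in the squashfs image /dev may be empty because the nodes were moved into devices.tar. A hedged sketch of baking the two required nodes into the image at build time (paths and image names are assumptions):

# create static console/null nodes inside the root tree before squashing it
cd rootfs/dev
sudo mknod -m 600 console c 5 1
sudo mknod -m 666 null c 1 3
cd ../..
mksquashfs rootfs rootfs.squashfs

With those nodes present, init can open its console before mountkernfs.sh mounts the tmpfs over /dev, which would also explain why the echo output reappears. As for sizing, a tmpfs only consumes memory for what is actually stored in it, so a 5 MB limit for a static /dev is not wasteful.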

    Read the article

  • FOSUserBundle override mapping to remove need for username

    - by musoNic80
    I want to remove the need for a username in the FOSUserBundle. My users will login using an email address only and I've added real name fields as part of the user entity. I realised that I needed to redo the entire mapping as described here. I think I've done it correctly but when I try to submit the registration form I get the error: "Only field names mapped by Doctrine can be validated for uniqueness." The strange thing is that I haven't tried to assert a unique constraint to anything in the user entity. Here is my full user entity file: <?php // src/MyApp/UserBundle/Entity/User.php namespace MyApp\UserBundle\Entity; use FOS\UserBundle\Model\User as BaseUser; use Doctrine\ORM\Mapping as ORM; use Symfony\Component\Validator\Constraints as Assert; /** * @ORM\Entity * @ORM\Table(name="depbook_user") */ class User extends BaseUser { /** * @ORM\Id * @ORM\Column(type="integer") * @ORM\GeneratedValue(strategy="AUTO") */ protected $id; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your first name.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) */ protected $firstName; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your last name.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) */ protected $lastName; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your email address.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) * @Assert\Email(groups={"Registration"}) */ protected $email; /** * @ORM\Column(type="string", length=255, name="email_canonical", unique=true) */ protected $emailCanonical; /** * @ORM\Column(type="boolean") */ protected $enabled; /** * @ORM\Column(type="string") */ protected $salt; /** * @ORM\Column(type="string") */ protected $password; /** * @ORM\Column(type="datetime", nullable=true, name="last_login") */ protected $lastLogin; /** * @ORM\Column(type="boolean") */ protected $locked; /** * @ORM\Column(type="boolean") */ protected $expired; /** * @ORM\Column(type="datetime", nullable=true, name="expires_at") */ protected $expiresAt; /** * @ORM\Column(type="string", nullable=true, name="confirmation_token") */ protected $confirmationToken; /** * @ORM\Column(type="datetime", nullable=true, name="password_requested_at") */ protected $passwordRequestedAt; /** * @ORM\Column(type="array") */ protected $roles; /** * @ORM\Column(type="boolean", name="credentials_expired") */ protected $credentialsExpired; /** * @ORM\Column(type="datetime", nullable=true, name="credentials_expired_at") */ protected $credentialsExpiredAt; public function __construct() { parent::__construct(); // your own logic } /** * @return string */ public function getFirstName() { return $this->firstName; } /** * @return string */ public function getLastName() { return $this->lastName; } /** * Sets the first name. * * @param string $firstname * * @return User */ public function setFirstName($firstname) { $this->firstName = $firstname; return $this; } /** * Sets the last name. * * @param string $lastname * * @return User */ public function setLastName($lastname) { $this->lastName = $lastname; return $this; } } I've seen various suggestions about this but none of the suggestions seem to work for me. 
The FOSUserBundle docs are very sparse about what must be a very common request.
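One commonly suggested workaround (a sketch, not an official FOSUserBundle API) is to keep the username columns mapped but mirror the email into them, so the bundle's internal username handling, including the uniqueness validation on usernameCanonical that most likely triggers the "Only field names mapped by Doctrine can be validated for uniqueness" error, always has a mapped field to work with:

<?php
// src/MyApp/UserBundle/Entity/User.php (excerpt, hypothetical)
// Map username/usernameCanonical like the other columns, then:

public function setEmail($email)
{
    $this->setUsername($email);
    return parent::setEmail($email);
}

public function setEmailCanonical($emailCanonical)
{
    $this->setUsernameCanonical($emailCanonical);
    return parent::setEmailCanonical($emailCanonical);
}

The registration and profile form types would then drop their username fields, while the base class's setUsername()/setUsernameCanonical() (both defined on FOS\UserBundle\Model\User) keep the mirrored values in sync.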

    Read the article

  • Xcode/iPhone Development 6 months in - Annoyances

    - by clearbrian
Hi, I've been doing iPhone programming for 6 months, coming from a PC/Java/Eclipse background, and I still have a few annoyances with Xcode/iPhone programming that I wonder if there are any shortcuts for.

1) Is there any way to prevent multiple windows opening all the time in Xcode?
a) When you click on the Errors/Warnings in the bottom right of the status bar, build errors are shown in a separate window. Any way to get these to show in the main editor?
b) Any way to get the debugger to appear in the main editor? I have a big-screen iMac and it's still window hell on Macs. When you come from Alt-Tab, the Mac is a nightmare.

2) Any way to get a toolbar item on the main editor to:
a) Open the Console (I know CMD-thingy-R)
b) Open Breakpoints (you have to open the Debugger first, then Breakpoints)
I know there are keyboard shortcuts, but I only have my left hand free (the other is on the trackball), so any keys on the right-hand side of the keyboard are too far. I know you can add Finder toolbar scripts (just wondering if there is any way to extend Xcode). Are there utilities to extend Xcode? Scripts/Automator/any Services I can set up to help? Can you automate Xcode like you can with Windows/ActiveX/VBA?

3) Limit lookups using CMD + double-click. If I double-click on a variable to find its definition using CMD + double-click, it shows every occurrence of all variables with that name (annoying if you name all your maps mapView). Any way to get it to limit to the current class, or at least order the results so the current class is first?

4) Find doesn't seem to loop backwards if the results are all above the cursor position. I'm in a class and I hit CMD + F for Find. The Find box appears. I enter some text and hit return. It says I have x matches but only the back arrow is highlighted in Find. But when I hit < it does nothing. I need to scroll to the top and redo the search. If the text is both forwards and backwards, then both arrows are highlighted and it works. Is this a bug or a 'feature'?

Missing Eclipse features: I have been looking at the User Script menu but was wondering how powerful the scripts are.

5) Any scripts around to generate source from members, such as description, @property and @synthesize? So if I add a new member, running a script will generate the @property/@synthesize and the release in dealloc.

6) Any good sites for scripts?

SCM: I'm having problems with SCM and folders on disk under the project's Classes directory. You get a library, e.g. JSON. It usually comes as a folder. You copy it to /Classes for your project (/Classes/JSON). I create a group for the library in Xcode under the Classes group. I drag the files from the folder into Xcode into the JSON group. I add them to the SCM and the icon changes from ? to A, but if I try to commit them it says folder /JSON is not under SCM. Can you drag a folder into Xcode so that it AND its files get included in SCM?

Any way to stop Xcode Help from being on top all the time? I keep feeling like punching it and telling it to get out of the way! :) I don't mind it being open, just not in the way once I've finished. (Yes, I know I can Ctrl-W.)

Sites: the main sites I use to learn Obj-C are:
stackoverflow.com
Google Code Search - tonnes of full apps on here
http://www.iphonedevsdk.com/forum/iphone-sdk-development/
Apple Developer Forums (any way to get an RSS feed for these, or is that blasphemy? :))
Safari - hundreds of IT books, though probably too many to keep up with :)
Any others? Any site that gives simple examples for Obj-C/UIKit? The docs just show the methods but not actual examples (Google Code Search has helped a lot here).

    Read the article

  • Paperclip: "missing" image when uses has_one

    - by EricR
    I'm working on a website that allows people who run bed and breakfast businesses to post their accommodations. I would like to require that they include a "profile image" of the accommodation when they post it, but I also want to give them the option to add more images later (this will be developed after). I thought the best thing to do would be to use the Paperclip gem and have a Accommodation and a Photo in my application, the later belonging to the first as an association. A new Photo record is created when they create an Accommodation. It has both id and accommodation_id attributes. However, the image is never uploaded and none of the Paperclip attributes get set (image_file_name: nil, image_content_type: nil, image_file_size: nil), so I get Paperclip's "missing" photo. Any ideas on this one? It's been keeping me stuck for a few days now. Accommodation models/accommodation.rb class Accommodation < ActiveRecord::Base validates_presence_of :title, :description, :photo, :thing, :location attr_accessible :title, :description, :thing, :borough, :location, :spaces, :price has_one :photo end controllers/accommodation_controller.erb class AccommodationsController < ApplicationController before_filter :login_required, :only => {:new, :edit} uses_tiny_mce ( :options => { :theme => 'advanced', :theme_advanced_toolbar_location => 'top', :theme_advanced_toolbar_align => 'left', :theme_advanced_buttons1 => 'bold,italic,underline,bullist,numlist,separator,undo,redo', :theme_advanced_buttons2 => '', :theme_advanced_buttons3 => '' }) def index @accommodations = Accommodation.all end def show @accommodation = Accommodation.find(params[:id]) end def new @accommodation = Accommodation.new end def create @accommodation = Accommodation.new(params[:accommodation]) @accommodation.photo = Photo.new(params[:photo]) @accommodation.user_id = current_user.id if @accommodation.save flash[:notice] = "Successfully created your accommodation." render :action => 'show' else render :action => 'new' end end def edit @accommodation = Accommodation.find(params[:id]) end def update @accommodation = Accommodation.find(params[:id]) if @accommodation.update_attributes(params[:accommodation]) flash[:notice] = "Successfully updated accommodation." render :action => 'show' else render :action => 'edit' end end def destroy @accommodation = Accommodation.find(params[:id]) @accommodation.destroy flash[:notice] = "Successfully destroyed accommodation." redirect_to :inkeep end private def check_owner end end views/accommodations/_form.html.erb <%= form_for @accommodation, :html => {:multipart => true} do |f| %> <%= f.error_messages %> <p> Title<br /> <%= f.text_field :title, :size => 60 %> </p> <p> Description<br /> <%= f.text_area :description, :rows => 17, :cols => 75, :class => "mceEditor" %> </p> <p> Photo<br /> <%= f.file_field :photo %> </p> [... snip ...] <p><%= f.submit %></p> <% end %> Photo The controller and views are still the same as when Rails generated them. models/photo.erb class Photo < ActiveRecord::Base attr_accessible :image_file_name, :image_content_type, :image_file_size belongs_to :accommodation has_attached_file :image, :styles => { :thumb=> "100x100#", :small => "150x150>" } end
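One hedged reading of the symptoms (a sketch, not a confirmed fix): the form posts the file as params[:accommodation][:photo], but Photo's attachment is named :image, so Photo.new(params[:photo]) receives nothing useful and Paperclip never sees the upload; in addition, Photo's attr_accessible list whitelists the Paperclip columns rather than :image itself, which would block the mass assignment anyway. Something along these lines, assuming attr_accessible :image is added to the Photo model:

# app/controllers/accommodations_controller.rb (sketch)
def create
  uploaded = params[:accommodation].delete(:photo)  # the uploaded file
  @accommodation = Accommodation.new(params[:accommodation])
  @accommodation.build_photo(:image => uploaded)    # hand it to has_attached_file :image
  @accommodation.user_id = current_user.id
  if @accommodation.save
    flash[:notice] = "Successfully created your accommodation."
    render :action => 'show'
  else
    render :action => 'new'
  end
end

Since Accommodation has validates_presence_of :photo, building the associated record before save also keeps that validation satisfied.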

    Read the article

  • NEED your opinion on .net Profile class VS session vars

    - by Ted
To save trips to the SQL database in my older apps, I store *dozens of data points about the current user in an array and then store the array in a session. For example, info that might be used repeatedly during a user's session might be stored like this…

Dim a(7) as string
a(0) = "FirstName"
a(1) = "LastName"
a(2) = "Address"
a(3) = "Address2"
a(4) = "City"
a(5) = "State"
a(6) = "Zip"
session.add("s_a", a)

*Some apps have an array 100 in size. That is something I learned in my ASP Classic days. Referencing the correct index can be laborious, and I find it difficult to go back and add another data point in the array grouped with like data. For example, suppose I need to add Middle Initial to the array as a design alteration. Unless I redo the whole index mapping, I have to stick Middle Initial in the next open slot, which might be in the 50s.

NOW, I am considering doing something easier to reference each time (eliminating the need to know the index of the value wanted). So I am looking to do this…

session.add("Firstname", "FirstName")
session.add("Lastname", "LastName")
session.add("Address", "Address")
etc.

BUT, before I do this, I would like some guidance. I am afraid this might be less efficient, even though easier to use. I don't know if a new session object is created for each data point, or if there is only one session object and I am adding a name/value pair to that object. If I am adding a name/value pair to a single object, that seems like a good idea. Does anyone know? Or is there a more preferred way? The built-in Profile class?

Re: the Profile class, I have an internal debate about scope. It seems that the .NET Profile class is good for storing app-SPECIFIC user settings (i.e. style theme, object display properties, user role, etc.). The examples I give are information whose values are selected/edited by the user to customize the application experience. This information is not typically stored/edited elsewhere in the app db. But when you have data that 1) is stored already in the app db and 2) can be altered by other users (in this case: company reps may update a client's status, address, etc.), then the persistence of the Profile data may be an issue. In this case, the Profile would need to be reset at the beginning and dropped, like a session.abandon, at the end of each user's session to prevent reloading info that had since been edited by someone. I believe this is possible, but I am not sure.

Currently, I use the session array to store both scopes, app-specific and user-specific data. If my session plan is good, I think I will create a class to set/get values from the session also. I appreciate your thoughts. I would like to know how others have handled this type of situation. Thanks.
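On the specific question: there is only one session object per user; Session.Add simply stores another name/value pair in that per-user dictionary, so per-key entries cost little more than one array entry apart from a small amount of bookkeeping. Here is a minimal VB.NET sketch of a typed wrapper (class and key names are illustrative, not from the original app) that removes both the magic indexes and the magic strings:

Imports System.Web
Imports System.Web.SessionState

Public Class UserSession
    ' Central access to the current user's session
    Private Shared ReadOnly Property Session() As HttpSessionState
        Get
            Return HttpContext.Current.Session
        End Get
    End Property

    Public Shared Property FirstName() As String
        Get
            Return TryCast(Session("FirstName"), String)
        End Get
        Set(ByVal value As String)
            Session("FirstName") = value
        End Set
    End Property

    ' ...repeat for LastName, Address, City, State, Zip, etc.
End Class

Calling code then reads UserSession.FirstName instead of s_a(0), and adding Middle Initial later is just one more property rather than a re-mapping exercise.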

    Read the article

  • Ajax Control Toolkit July 2011 Release and the New HTML Editor Extender

    - by Stephen Walther
    I’m happy to announce the July 2011 release of the Ajax Control Toolkit which includes important bug fixes and a completely new HTML Editor Extender control. You can download the July 2011 Release by visiting the Ajax Control Toolkit CodePlex site at: http://AjaxControlToolkit.CodePlex.com Using the New HTML Editor Extender Control You can use the new HTML Editor Extender to extend any standard ASP.NET TextBox control so that it supports rich formatting such as bold, italics, bulleted lists, numbered lists, typefaces and different foreground and background colors. The following code illustrates how you can extend a standard ASP.NET TextBox control with the HtmlEditorExtender: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Simple.aspx.cs" Inherits="WebApplication1.Simple" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Simple</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager runat="Server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="60" Rows="8" runat="server" /> <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server" /> </form> </body> </html> This page has the following three controls: ToolkitScriptManager – The ToolkitScriptManager renders all of the scripts required by the Ajax Control Toolkit. TextBox – The TextBox control is a standard ASP.NET TextBox which is set to display multiple lines (a TextArea instead of an Input element). HtmlEditorExtender – The HtmlEditorExtender is set to extend the TextBox control. You can use the standard TextBox Text property to read the rich text entered into the TextBox control on the server. Lightweight and HTML5 The HTML Editor Extender works on all modern browsers including the most recent versions of Mozilla Firefox (Firefox 5), Google Chrome (Chrome 12), and Apple Safari (Safari 5). Furthermore, the HTML Editor Extender is compatible with Microsoft Internet Explorer 6 and newer. The HTML Editor Extender is very lightweight. It takes advantage of the HTML5 ContentEditable attribute so it does not require an iframe or complex browser workarounds. If you select View Source in your browser while using the HTML Editor Extender, we hope that you will be pleasantly surprised by how little markup and script is generated by the HTML Editor Extender. Customizable Toolbar Buttons Depending on the web application that you are building, you will want to display different toolbar buttons with the HTML Editor Extender. One of the design goals of the HTML Editor Extender was to make it very easy for you to customize the toolbar buttons. Imagine, for example, that you want to use the HTML Editor Extender when accepting comments on blog posts. In that case, you might want to restrict the type of formatting that a user can display. You might want to enable a user to format text as bold or italic but you do not want the user to make any other formatting changes. 
The following page illustrates how you can customize the HTML Editor Extender toolbar: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="CustomToolbar.aspx.cs" Inherits="WebApplication1.CustomToolbar" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html> <head runat="server"> <title>Custom Toolbar</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager Runat="server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="10" Text="Hello <b>world!</b>" Runat="server" /> <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server"> <Toolbar> <asp:Bold /> <asp:Italic /> </Toolbar> </asp:HtmlEditorExtender> </form> </body> </html> Notice that the HTML Editor Extender in the page above has a Toolbar subtag. You can list the toolbar buttons which you want to appear within the subtag. In the case above, only Bold and Italic buttons are displayed. Here is a complete list of the Toolbar buttons currently supported by the HTML Editor Extender: Undo Redo Bold Italic Underline StrikeThrough Subscript Superscript JustifyLeft JustifyCenter JustifyRight JustifyFull InsertOrderedList InsertUnorderedList CreateLink UnLink RemoveFormat SelectAll UnSelect Delete Cut Copy Paste BackgroundColorSelector ForeColorSelector FontNameSelector FontSizeSelector Indent Outdent InsertHorizontalRule HorizontalSeparator Of course the HTML Editor Extender was designed to be extensible. You can create your own buttons and add them to the control. Compatible with the AntiXSS Library When using the HTML Editor Extender on a public facing website, we strongly recommend that you use the HTML Editor Extender with the AntiXSS Library. If you allow users to submit arbitrary HTML, and you don’t take any action to strip out malicious markup, then you are opening your website to Cross-Site Scripting Attacks (XSS attacks). The HTML Editor Extender uses the Provider Model to support different Sanitizer Providers. The July 2011 release of the Ajax Control Toolkit ships with a single Sanitizer Provider which uses the AntiXSS library (see http://AntiXss.CodePlex.com ). A Sanitizer Provider is responsible for sanitizing HTML markup by removing any malicious elements, attributes, and attribute values. For example, the AntiXss Sanitizer Provider will take the following block of HTML: <b><a href=""javascript:doEvil()"">Visit Grandma</a></b> <script>doEvil()</script> And return the following sanitized block of HTML: <b><a href="">Visit Grandma</a></b> Notice that the JavaScript href and <SCRIPT> tag are both stripped out. Be aware that there are a depressingly large number of ways to sneak evil markup into your HTML. You definitely want a Sanitizer as a safety net. Before you can use the AntiXSS Sanitizer Provider, you must add three assemblies to your web application: AntiXSSLibrary.dll, HtmlSanitizationLibrary.dll, and SanitizerProviders.dll. All three assemblies are included with the CodePlex download of the Ajax Control Toolkit in the SanitizerProviders folder. 
Here’s how you modify your web.config file to use the AntiXSS Sanitizer Provider: <configuration> <configSections> <sectionGroup name="system.web"> <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/> </sectionGroup> </configSections> <system.web> <compilation targetFramework="4.0" debug="true"/> <sanitizer defaultProvider="AntiXssSanitizerProvider"> <providers> <add name="AntiXssSanitizerProvider" type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add> </providers> </sanitizer> </system.web> </configuration> You can detect whether the HTML Editor Extender is using the AntiXSS Sanitizer Provider by checking the HtmlEditorExtender SanitizerProvider property like this: if (MyHtmlEditorExtender.SanitizerProvider == null) { throw new Exception("Please enable the AntiXss Sanitizer!"); } When the SanitizerProvider property has the value null, you know that a Sanitizer Provider has not been configured in the web.config file. Because the AntiXSS library requires Full Trust, you cannot use the AntiXSS Sanitizer Provider with most shared website hosting providers. Because most shared hosting providers only support Medium Trust and not Full Trust, we do not recommend using the HTML Editor Extender with a public website hosted with a shared hosting provider. Why a New HTML Editor Control? The Ajax Control Toolkit now includes two HTML Editor controls. Why did we introduce a new HTML Editor control when there was already an existing HTML Editor? We think you will like the new HTML Editor much more than the previous one. We had several goals with the new HTML Editor Extender: Lightweight – We wanted to leverage HTML5 to create a lightweight HTML Editor. The new HTML Editor generates much less markup and script than the previous HTML Editor. Secure – We wanted to make it easy to integrate the AntiXSS library with the HTML Editor. If you are creating a public facing website, we strongly recommend that you use the AntiXSS Provider. Customizable – We wanted to make it easy for users to customize the toolbar buttons displayed by the HTML Editor. Compatibility – We wanted to ensure that the HTML Editor will work with the latest versions of the most popular browsers (including Internet Explorer 6 and higher). The old HTML Editor control is still included in the Ajax Control Toolkit and continues to live in the AjaxControlToolkit.HTMLEditor namespace. We have not modified the control and you can continue to use the control in the same way as you have used it in the past. However, we hope that you will consider migrating to the new HTML Editor Extender for the reasons listed above. Summary We’ve introduced a new Ajax Control Toolkit control with this release. I want to thank the developers and testers on the Superexpert team for the huge amount of work which they put into this control. It was a non-trivial task to build an entirely new control which has the complexity of the HTML Editor in less than 6 weeks. Please let us know what you think! We want to hear your feedback. If you discover issues with the new HTML Editor Extender control, or you have questions about the control, or you have ideas for how it can be improved, then please post them to this blog. Tomorrow starts a new sprint
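As a small usage sketch (control and handler names assumed), reading the value back on the server is just the normal Text property; with the AntiXss provider configured as above, the markup should already have been sanitized by the time it reaches your code:

protected void btnSave_Click(object sender, EventArgs e)
{
    // txtComments is the TextBox extended by the HtmlEditorExtender
    string html = txtComments.Text;  // sanitized HTML fragment
    // ... persist 'html' to your data store here ...
}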

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with the Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process that uses Web Service calls to the Oracle Utilities Application Framework.

    The scenario is this: I will pass in a userid, and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test that illustrates how to import a Web Service into SOA Suite. To follow this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

    First of all, you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. The same process can be used to create a Mediator or OSB project (refer to the JDeveloper documentation on those technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process the name simpleBPEL (you can use a different name, but I wanted to keep the project and process names the same for this example). I will also build a Synchronous BPEL Process, as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against.

    Note: for simplicity I am going to use as much defaulting as possible. In fact, I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as the default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this, I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas, in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling).

    When you drop the Web Service onto the canvas, the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used the name RetrieveUser. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL, you can press the "Find existing WSDLs" button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you intend to work on the BPEL process offline. If you do not check this, your target application must be accessible whenever you work on the BPEL process (and that is not always convenient).

    Note: The perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that I shifted to a new server for these demonstrations and did not redo all of the past screen captures.

    If you copy the WSDL into the project, you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.
    To complete the setup of the Web Service, you need to set the credentials for the Web Service to use. Refer to the past post on how to do that.

    Now to use the Web Service. To call the Web Service (as it has just been imported, not yet connected to the BPEL process), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes. This will create an empty Invoke action.

    You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service.

    Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway, and to name them appropriately, so that you can trace the logic. I used InvokeUser as the name in this example. To complete the node configuration, you must create Variables to hold the input and output for the call. To do this, click Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name as a prefix (which is why it is important to name the node before hitting this button). You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You now have a very basic BPEL process which contains an input, an invoke and an output node. It is not complete yet, though. You need to tell the BPEL process how to pass data from the input to the invoke step, and how to take the output from the service call and pass it back to the client.

    You now need to add an Assign node to assign the input to the Web Service. To do this, select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify its name. I use AssignUser to describe that I am assigning user data.

    On the Copy Rules tab you can specify the mapping between the input variable (the inputVariable/payload/process/input string) and the input variable for the Web Service call. We are passing data from the input of the BPEL process to the relevant input variable of the Web Service. This is simply drag and drop between the two data structures. In the example, I map the input to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from the source will be copied to the target).

    Almost there. You now need to pass the output from the Web Service call back to the outputVariable of the client call. I have decided to pass back one piece of data: the name associated with the user, built by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process.
    In this case we want to transform the output from the Web Service call, so we want it between the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node allows you to name the node. In this example I used TransformName.

    To complete the transform, I need to tell the product the source of the transformation and the target. In the example, the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map, I can create the map.

    The mapping screen shows the source and target schemas for me to map across. As with the Assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I then attach lastName to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space, so to make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project, as my BPEL process is complete. As you can see, the following happens:

    1) We accept input from the client (the userid for the call) in the receiveInput step.
    2) We assign that value to the input parameters for the Web Service call in the AssignUser step.
    3) We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
    4) We take the output from the InvokeUser step and concatenate the names in the TransformName step.
    5) We pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect, as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (otherwise you will get a 401 error). Navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls.

    As you can see, this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I have just shown you the basic principles.
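
    If you prefer to exercise the deployed process outside of Fusion Middleware Control, a plain SOAP call works too. The following is a minimal C# sketch, not Oracle-provided code: the endpoint URL and namespace are assumptions (copy the real values from your deployed composite's WSDL), while the process/input request and processResponse/result response shapes are the defaults JDeveloper generates for a synchronous BPEL template.

        // A minimal test-client sketch. The endpoint and namespace below are
        // placeholders; take the real values from the deployed composite's WSDL.
        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class SimpleBpelTestClient
        {
            static async Task Main()
            {
                // Hypothetical endpoint; Fusion Middleware Control shows the
                // real one on the composite's Test Service page.
                const string endpoint =
                    "http://soahost:8001/soa-infra/services/default/simpleBPEL/simplebpel_client_ep";

                // Default synchronous BPEL contract: one "input" string in,
                // one "result" string out. The namespace is an assumption.
                const string soapEnvelope =
                    "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"" +
                    " xmlns:ns=\"http://xmlns.oracle.com/simpleBPEL\">" +
                    "<soap:Body><ns:process><ns:input>MYUSER</ns:input></ns:process></soap:Body>" +
                    "</soap:Envelope>";

                using var client = new HttpClient();
                var request = new HttpRequestMessage(HttpMethod.Post, endpoint)
                {
                    Content = new StringContent(soapEnvelope, Encoding.UTF8, "text/xml")
                };
                request.Headers.Add("SOAPAction", "process");

                // The processResponse/result element in the reply carries the
                // concatenated "firstName lastName" built by the TransformName step.
                HttpResponseMessage response = await client.SendAsync(request);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }

    Remember that the service is secured, so a real test would also attach the credentials configured earlier; this sketch omits authentication for brevity.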

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to assist the EM Administrator in monitoring the health of the EM infrastructure. Taking feedback delivered from customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators.

    In this post we'll highlight some of the new information on display in these redesigned pages and explain how it can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure.

    The first page we'll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you'll see the new layout, which includes 3 tabs containing more drill-down information.

    The Repository Tab

    The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information the EM Administrator needs to review from time to time to ensure the infrastructure is in good health. Rather than go through every panel, let's call out a few and let you explore the others later on your own EM site.

    Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important pieces of information relating to availability and reliability:

    1) Is the database in Archive Log mode?
    2) Is the database using Flashback?
    3) When was the last database backup taken?

    In the test environment above the answers are not too worrying; however, Production environments should have at least Archive Log mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates that no backups have been recorded.

    The next region of interest on this page shows key information about the Repository configuration, specifically the initialisation parameters (from the spfile). If you're storing your EM Repository in a Cluster Database, you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you'll note there is now a check performed on the active configuration to ensure that you're using, at the very least, Oracle's minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and, optionally, the run-time values if the parameter allows dynamic changes).

    The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen in previous releases of the MTM pages in Cloud Control, but some important, customer-requested functionality has been added. First up: restarting Repository jobs.
    As you can see from the graphic, you can now optionally select a job (by selecting its row in the UI table) and click the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation. You can now take care of it directly from within the UI.

    Next, you'll see that a feature has been added to allow the EM Administrator to customise the run-time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don't clash with Production backups, etc., is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid clashes like this.

    Moving on to the next tab, let's select the Metrics tab.

    The Metrics Tab

    There are some big changes here. This page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have told us they are in the dark about the impact that adding new targets, or large numbers of new hosts or new target types, has on the Repository. This page helps the EM Administrator get to grips with this. Let's take a quick look at two regions on this page.

    First up, there's a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates relative volume. You can see from the example above that Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely behind, though, are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7 MB of data, while the total information collected for WebLogic Server and Application Deployments is 0.38 MB and 0.37 MB respectively.

    If you want this breakdown of the volume of data collected, simply hover over a bubble in the chart and you'll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows the relative cost of each metric type, in terms of number of rows, number of collections and storage cost of data.

    Looking at another panel on this page, we can see a different view of this data. This view shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) and sorts them by volume of data. In the case above, the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target.

    Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It's a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation.
    Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we'll look at on this page is the Schema tab.

    The Schema Tab

    Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of space allocations for the EM Repository, so that it works as efficiently as possible and performs well for users, not least because well-managed storage ensures the continued availability of EM for monitoring purposes.

    The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend here, showing that storage in the EM Repository has been steadily consumed over the last few days. This is normal, as the EM installation used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time (the number of new inbound targets is relatively small, and the metric collection settings are fairly uniform and standardised, using Templates and Template Collections), you're likely to see a trend of space allocation that plateaus.

    The table below the trend chart shows the Top 20 Tables/Indexes, sorted in descending order of space consumed. You can switch the trend chart and the corresponding detail table to a different tablespace in the EM Repository using the drop-down picker in the top right of this region.

    The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information illustrates to EM Administrators the default purge policies for the different categories of information available in the EM Repository. Of course, the ability to modify these default retention periods has also been a long-requested feature, and you can now do this from this screen. As there are interdependencies between some data elements, you can't modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups; retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these policies can have a significant impact on both the EM Repository's storage footprint and its performance. For now, we're just highlighting the feature's visibility on these new pages.

    As a user of EM12c, we hope the new features you see here address some of the feedback that's been given on these pages over the past few releases. We'll look out for any comments or feedback you have on these pages!

    Read the article

< Previous Page | 9 10 11 12 13 14  | Next Page >