Search Results

Search found 5262 results on 211 pages for 'operation'.


  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
    Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which of the following is your favorite article from memory lane. 2007 Set Server Level FILLFACTOR Using T-SQL Script Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0. Limitation of Online Index Rebuild Operation Online operation means that while online operations are happening, the database remains in its normal operational condition, and the processes participating in those online operations do not require exclusive access to the database. Get Permissions of My Username / Userlogin on Server / Database A few days ago, I was invited to one of the largest database companies. I was asked to review their database schema and propose changes to it. A special username or user login was created for me so that I could review their database. I was very much interested to know what kind of permissions I had been assigned at the server level and at the database level. I did not feel like asking the Sr. DBA about permissions. Simple Example of WHILE Loop With CONTINUE and BREAK Keywords This question is one of those questions that is very simple and most users get it correct; however, a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example. The BREAK keyword exits the WHILE loop and control moves to the next statement after the loop. The CONTINUE keyword skips all the statements after it and control is sent back to the first statement of the WHILE loop (a short illustrative script appears at the end of this excerpt). Forced Parameterization and Simple Parameterization – T-SQL and SSMS When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is set to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries. 2008 Transaction and Local Variables – Swap Variables – Update All At Once Concept Summary: Transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or in memory), all the updates are applied together at one time when the statement is committed. First of all, I suggest that you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect. Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis Just a day ago, while I was working with JOINs, I found one interesting observation, which prompted me to create the following example. Before we continue further, let me make it very clear that an INNER JOIN should be used wherever it can be used, and simulating an INNER JOIN using any other JOIN will degrade performance. If there is scope to convert any OUTER JOIN to an INNER JOIN, it should be done with priority.
2009 Introduction to Business Intelligence – Important Terms & Definitions Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making. Difference Between Candidate Keys and Primary Key Candidate Key – A Candidate Key can be any column or combination of columns that can qualify as a unique key in the database. There can be multiple Candidate Keys in one table. Each Candidate Key can qualify as a Primary Key. Primary Key – A Primary Key is a column or a combination of columns that uniquely identifies a record. Only one Candidate Key can be the Primary Key. 2010 Taking Multiple Backup of Database in Single Command – Mirrored Database Backup I recently had a very interesting experience. In one of my recent consultancy engagements, I was told by the client that they were going to take a backup of the database and also make a copy of it at the same time. I said that this was certainly possible if they used a mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database is always reduced. Now this was something not clear to me; I said it was not possible, and so I asked them to show me the script. Corrupted Backup File and Unsuccessful Restore The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test was done. As expected, the answer was NEVER. There were no successful restore tests done before. During that time, I was present and I could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling, and I was pretty sure that no one in there wanted that atmosphere any more than I did. 2011 TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired, it goes through various stages as well as various routes. One of the routes is the Trace I/O Provider, which sends data to its final destination either as a file or a rowset. DATEDIFF – Accuracy of Various Dateparts If you want to have accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it into minutes. Dedicated Access Control for SQL Server Express Edition http://www.youtube.com/watch?v=1k00z82u4OI Book Signing at SQLPASS 2012 Who I Am And How I Got Here – True Story as Blog Post If there was a shortcut to success, I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn. There is not enough time to learn everything we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – the more the merrier. Vacation, Travel and Study – A New Concept Even those who have advanced degrees and went to college for years, or even decades, find studying hard. There is a difference between studying for a career and studying for a certification. At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.
Order By Numeric Values Formatted as String We have a table which has a column containing alphanumeric data. The data always has an integer as its first part and a string as its later part. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time developers have worked with SQL Server and know pretty much every error they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion – unless you are a DBA who is responsible for such an instance – the errors faced during installation are pretty rare as well. http://www.youtube.com/watch?v=1k00z82u4OI Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
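
    The WHILE loop behaviour described in the excerpt is easy to verify with a tiny, self-contained script. The following T-SQL sketch is not taken from the original articles; it simply illustrates how CONTINUE jumps back to the loop condition and how BREAK leaves the loop early:

        -- Hypothetical example: print odd numbers, stop once the counter passes 7.
        DECLARE @i INT = 0;

        WHILE @i < 10
        BEGIN
            SET @i += 1;

            IF @i % 2 = 0
                CONTINUE;   -- skip even values; control returns to the WHILE condition

            IF @i > 7
                BREAK;      -- leave the loop entirely

            PRINT @i;       -- prints 1, 3, 5, 7
        END;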

    Read the article

  • Package management fails in update-manager with gzip problems and compilation errors. U12.04LTS

    - by HarveyP
    Similar to but not the same as Package management system corrupted. Cannot install or remove packages. U12.04LTS (an earlier problem) with package management system. Followed all of L. D. James's suggestions in his answer, to no avail. This time, as well as the gzip error, I am also getting compilation errors. The difference may be due to a lack of compilation in my earlier problem, so it may be the same error. The packages concerned are enumerated in the output from update-manager below. Also included below that is the output from apt-get -f install; apt-get autoremove gives the same output. Tried update without SSL updates - 9 to install and got "Unhandled Error in aptdaemon". Output number 3 below. One at a time - output 4 - is for firefox, first in the list of packages. Falls over at libssl1.0.0 despite deselection of it from update ... Tried apt-get install --reinstall dpkg, which succeeded, and apt-get install --reinstall tar and apt-get install --reinstall gzip, both of which failed at libssl1.0.0 as ever (as suggested by Subv3rsion elsewhere in this forum). Now cannot apt-get update with complete success even after changing server and apt-get clean - output number 5 below ... 1). Output from update-manager The following packages will be upgraded: firefox firefox-globalmenu firefox-locale-en libavcodec-extra-53 libavformat53 libavutil-extra-51 libjson0 libpostproc52 libssl1.0.0 libswscale2 openssl 11 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade. Need to get 0 B/46.5 MB of archives. After this operation, 1,416 kB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: Perl may be unconfigured (Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. Compilation failed in require at (eval 1) line 3. BEGIN failed--compilation aborted at (eval 1) line 3. ) -- aborting (Reading database ... 160575 files and directories currently installed.) Preparing to replace libssl1.0.0 1.0.1-4ubuntu5.14 (using .../libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb) ... Unpacking replacement libssl1.0.0 ... dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: data error' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb (--unpack): subprocess dpkg-deb --fsys-tarfile returned error exit status 2 No apport report written because MaxReports has already been reached Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9.
Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at /usr/share/perl5/Debconf/Db.pm line 7. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Db.pm line 7. Compilation failed in require at /usr/share/debconf/frontend line 6. BEGIN failed--compilation aborted at /usr/share/debconf/frontend line 6. dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) 2). Output from install -f harveyp@harveyp:~$ sudo apt-get -f install [sudo] password for harveyp: Reading package lists... Done Building dependency tree Reading state information... Done 0 to upgrade, 0 to newly install, 0 to remove and 11 not to upgrade. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. E: Internal Error, No file name for libssl1.0.0 3). Unhandled error from aptdaemon Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1045, in _simulate trans.unauthenticated = self.__simulate(trans) File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1160, in __simulate unauthenticated = self._get_unauthenticated() File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 347, in _get_unauthenticated for pkg in self._iterate_packages(): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1356, in _iterate_packages for enum, pkg in enumerate(self._cache): File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 216, in __iter__ yield self[pkgname] File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 201, in __getitem__ pkg = self._weakref[key] = Package(self, self._cache[key]) KeyError: 'librqrcode-ruby-doc' 4). Output from update of firefox installArchives() failed: Error in function: Setting up libssl1.0.0 (1.0.1-4ubuntu5.14) ... Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7.
BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. 5. output from apt-get update ...snip ... Hit http://ubuntu-archive.mirrors.free.org precise-security/multiverse Translation-en Hit http://ubuntu-archive.mirrors.free.org precise-security/restricted Translation-en Hit http://ubuntu-archive.mirrors.free.org precise-security/universe Translation-en Fetched 368 kB in 6s (59.5 kB/s) W: Failed to fetch gzip:/var/lib/apt/lists/partial/ubuntu-archive.mirrors.free.org_ubuntu_dists_precise_universe_source_Sources Hash Sum mismatch E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • How can I remove old kernels/install new ones when /boot is full?

    - by Marcel
    I know this question is asked many times before, however with me it is just a bit different I guess. # df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 224G 5.2G 208G 3% / udev 1.9G 4.0K 1.9G 1% /dev tmpfs 777M 260K 777M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/sda2 90M 88M 0 100% /boot /dev/sda6 1.9G 514M 1.3G 29% /tmp My boot partition is full. Current Kernel: # uname -r 3.2.0-35-generic All Kernels: # dpkg --list | grep linux-image ii linux-image-3.2.0-32-generic 3.2.0-32.51 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-34-generic 3.2.0-34.53 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-35-generic 3.2.0-35.55 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-37-generic 3.2.0-37.58 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-38-generic 3.2.0-38.60 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iU linux-image-generic 3.2.0.37.45 Generic Linux kernel image So I thought of removing the 3.2.0.32-generic kernel with: # sudo apt-get purge linux-image-3.2.0-32-generic Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-generic : Depends: linux-headers-generic (= 3.2.0.37.45) but 3.2.0.38.46 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). No success. When I try apt-get -f install it also fails: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.2.0-34 linux-headers-3.2.0-35 linux-headers-3.2.0-34-generic linux-headers-3.2.0-35-generic Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-generic linux-image-generic The following packages will be upgraded: linux-generic linux-image-generic 2 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 5 not fully installed or removed. Need to get 0 B/4,334 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up initramfs-tools (0.99ubuntu13.1) ... update-initramfs: deferring update (trigger activated) Setting up linux-image-3.2.0-37-generic (3.2.0-37.58) ... Running depmod. update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-38-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic /boot/vmlinuz-3.2.0-37-generic update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-37-generic (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-38-generic (3.2.0-38.60) ... Running depmod. 
update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-37-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-38-generic /boot/vmlinuz-3.2.0-38-generic update-initramfs: Generating /boot/initrd.img-3.2.0-38-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-38-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-38-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-38-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-37-generic; however: Package linux-image-3.2.0-37-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Package linux-image-generic is not configured yet. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already update-initramfs: Generating /boot/initrd.img-3.2.0-35-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-35-generic with 1. dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-37-generic linux-image-3.2.0-38-generic linux-image-generic linux-generic initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) Any help would really be appreciated. Update: I did: sudo rm /boot/*-3.2.0-32-generic /boot/*-3.2.0-34-generic After that the following problem with apt-get -f install: root@localhost:/# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic The following packages will be upgraded: linux-generic 1 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 1 not fully installed or removed. Need to get 0 B/1,722 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Version of linux-image-generic on system is 3.2.0.38.46. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. 
dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-generic E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • CodePlex Daily Summary for Monday, June 02, 2014

    CodePlex Daily Summary for Monday, June 02, 2014Popular ReleasesPortable Class Library for SQLite: Portable Class Library for SQLite - 3.8.4.4: This pull request from mattleibow addresses an issue with custom function creation (define functions in C# code and invoke them from SQLite as id they where regular SQL functions). Impact: Xamarin iOSTweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines- Added all the parameters available from the Timeline Endpoints in Tweetinvi. - This is available for HomeTimeline, UserTimeline, MentionsTimeline // Simple query var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters var timelineParameter = Timeline.GenerateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweets- Add mis...Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...Tooltip Web Preview: ToolTip Web Preview: Version 1.0Database Helper: Release 1.0.0.0: First Release of Database HelperCoMaSy: CoMaSy1.0.2: !Contact Management SystemImage View Slider: Image View Slider: This is a .NET component. We create this using VB.NET. Here you can use an Image Viewer with several properties to your application form. We wish somebody to improve freely. Try this out! Author : Steven Renaldo Antony Yustinus Arjuna Purnama Putra Andre Wijaya P Martin Lidau PBK GENAP 2014 - TI UKDWAspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contain the Missing Features in Apache POI WP SDK in Comparison with Aspose.Words for dealing with Microsoft Word. What's New ?Following Examples: Insert Picture in Word Document Insert Comments Set Page Borders Mail Merge from XML Data Source Moving the Cursor Feedback and Suggestions Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.SEToolbox: 01.032.014 Release 1: Added fix when loading game Textures for icons causing 'Unable to read beyond the end of the stream'. Added new Resource Report, that displays all in game resources in a concise report. Added in temp directory cleaner, to keep excess files from building up. Fixed use of colors on the windows, to work better with desktop schemes. Adding base support for multilingual resources. This will allow loading of the Space Engineers resources to show localized names, and display localized date a...ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.Vi-AIO SearchBar: Vi – AIO Search Bar: Version 1.0Top Verses ( Ayat Emas ): Binary Top Verses: This one is the bin folder of the component. the .dll component is inside.Traditional Calendar Component: Traditional Calender Converter: Duta Wacana Christian University This file containing Traditional Calendar Component and Demo Aplication that using Traditional Calendar Component. 
This component made with .NET Framework 4 and the programming language is C# .SQLSetupHelper: 1.0.0.0: First Stable Version of SQL SetupComposite Iconote: Composite Iconote: This is a composite has been made by Microsoft Visual Studio 2013. Requirement: To develop this composite or use this component in your application, your computer must have .NET framework 4.5 or newer.Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: - Int/short Set methods of WritablePixelCollection are now unsigned. - The Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.fnr.exe - Find And Replace Tool: 1.7: Bug fixes Refactored logic for encoding text values to command line to handle common edge cases where find/replace operation works in GUI but not in command line Fix for bug where selection in Encoding drop down was different when generating command line in some cases. It was reported in: https://findandreplace.codeplex.com/workitem/34 Fix for "Backslash inserted before dot in replacement text" reported here: https://findandreplace.codeplex.com/discussions/541024 Fix for finding replacing...VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: changes NEW: Added Support for 'GokoImage.com' links NEW: Added Support for 'ViperII.com' links NEW: Added Support for 'PixxxView.com' links NEW: Added Support for 'ImgRex.com' links NEW: Added Support for 'PixLiv.com' links NEW: Added Support for 'imgsee.me' links NEW: Added Support for 'ImgS.it' linksToolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolbox improvement XrmToolBox updates (v1.2014.5.28)Fix connecting to a connection with custom authentication without saved password Tools improvement New tool!Solution Components Mover (v1.2014.5.22) Transfer solution components from one solution to another one Import/Export NN relationships (v1.2014.3.7) Allows you to import and export many to many relationships Tools updatesAttribute Bulk Updater (v1.2014.5.28) Audit Center (v1.2014.5.28) View Layout Replicator (v1.2014.5.28) Scrip...Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS CSS should honor the SASS source-file comments JS should allow multi-line comment directivesNew Projects[ISEN] Rendu de projet Naughty3Dogs - Pong3D: Pong3D est un jeu qui reprend le principe classique du Pong en le portant dans un environnement 3D à l'aide du langage c# et du moteur Unity3DBootstrap for MVC: Bootstrap for MVC.F. A. Q. - Najczesciej zadawane pytania: FAQForuMvc: Technifutur short projecthomework456: no.iStoody: Studies organize solution, available through app for Windows and Windows Phone.liaoliao: ???????????Price Tracker: Allows a user to track prices based on parsed emailsRoslynEval: RoslynRx Hub: Rx Hub provides server side computation which initiate by subscriber requestSharepoint Online AppCache Reset: We are an IT resource company providing Virtual IT services and custom and opensource programs for everyday needs. UnitConversionLib : Smart Unit Conversion Library in C#: Conversion of units, arithmetic operation and parsing quantities with their units on run time. Smart unit converter and conversion lib for physical quantities,

    Read the article

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive. IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL DROP TABLE #Test ; GO CREATE TABLE #Test ( id INTEGER PRIMARY KEY CLUSTERED ); ; INSERT #Test (id) SELECT V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 1000 ; Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10.  One way to write that query would be: SELECT T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; That query produces a pretty efficient-looking query plan: Knowing that the source column is defined as an INTEGER, we could also express the query this way: SELECT T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; We get a similar-looking plan: If you look closely, you might notice that the line connecting the two icons is a little thinner than before.  The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return.  The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0).  Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate.  Adding the opaque modulo expression results in SQL Server guessing at the selectivity.  As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows. The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement.  For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models.  Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054.  The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167). If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution.  We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON: Table '#Test'. Scan count 63, logical reads 126, physical reads 0. Table '#Test'. Scan count 01, logical reads 002, physical reads 0. The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1).  It is no coincidence that 126 = 63 * 2, by the way.  It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing.  There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.  
To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu): The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query.  I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101.  Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan. Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value.  Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf).  We can see the index depth using the INDEXPROPERTY function, or by using a DMV: SELECT S.index_type_desc, S.index_depth FROM sys.dm_db_index_physical_stats ( DB_ID(N'tempdb'), OBJECT_ID(N'tempdb..#Test', N'U'), 1, 1, DEFAULT ) AS S ; Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected: There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101).  It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169).  Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well. You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks.  It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster.  Let’s run both query forms 10,000 times and measure the elapsed time: DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON; SET STATISTICS XML OFF; ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; GO DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms.  The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail. When is a seek not a seek?  When it is 63 seeks © Paul White 2011 email: [email protected] twitter: @SQL_kiwi
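
    As a footnote to the logical-reads comparison quoted above, the STATISTICS IO figures can be reproduced with a short wrapper around the two query forms. This is only a sketch of the session settings involved (it assumes the #Test table created at the start of the post, and shortens the IN list for brevity); it is not part of the original article:

        -- Compare I/O for the two query forms (assumes #Test from the post exists).
        SET STATISTICS IO ON;

        SELECT T.id FROM #Test AS T
        WHERE T.id IN (101,102,103,104,105,106,107,108,109);   -- shortened IN list, so counts will be smaller than in the post

        SELECT T.id FROM #Test AS T
        WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0;

        SET STATISTICS IO OFF;   -- scan count and logical reads appear on the Messages tab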

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because its much harder to be best in class across such a wide a range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.  What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.  Supply Chain Management Systems in a Virtual Enterprise Historically enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.  Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.  Case in point, almost everyone has ordered from Amazon.com at one time or another. 
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).  Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.  The Clear Front Runner No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. 
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details. The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility: % cc -c ~/hello.c % ar r foo.a hello.o % file foo.a foo.a: current ar archive, 32-bit symbol table % ar r -S foo.a hello.o % file foo.a foo.a: current ar archive, 64-bit symbol table In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile Why file Is No Good For Archives And Yet Should Not Be Fixed The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things: Is this an archive? Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms? The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). To answer that question, requires a multi-step process: Extract all archive members Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description. Remove the extracted files It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so: The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit. It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it. 
Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation. I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line. In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command. The elffile Command The syntax for this new command is elffile [-s basic | detail | summary] filename... Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files: FileDescription configA runtime linker configuration file produced with crle dwarf.oAn ELF object /etc/passwdA text file mixed.aArchive containing a mixture of ELF and non-ELF members mixed_elf.aArchive containing ELF objects for different machines not_elf.aArchive containing no ELF objects same_elf.aArchive containing a collection of ELF objects for the same machine. This is the most common type of archive. The file utility identifies these files as follows: % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a config: Runtime Linking Configuration 64-bit MSB SPARCV9 dwarf.o: ELF 64-bit LSB relocatable AMD64 Version 1 /etc/passwd: ascii text mixed.a: current ar archive, 32-bit symbol table mixed_elf.a: current ar archive, 32-bit symbol table not_elf.a: current ar archive same_elf.a: current ar archive, 32-bit symbol table By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways: Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps. Applying elffile to the above files: % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a config: Runtime Linking Configuration 64-bit MSB SPARCV9 dwarf.o: ELF 64-bit LSB relocatable AMD64 Version 1 /etc/passwd: non-ELF mixed.a: current ar archive, 32-bit symbol table, mixed ELF and non-ELF content mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content not_elf.a: current ar archive, non-ELF content same_elf.a: current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1 The output for same_elf.a is of particular interest: The vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files. 
elffile can produce output in two other styles, "basic", and "detail". The basic style produces output that is the same as that from 'file', for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF object, and more information is desired than the summary output provides: % elffile -s detail mixed.a mixed.a: current ar archive, 32-bit symbol table mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1 mixed.a(main.c): non-ELF content mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]

    Read the article

  • parallel_for_each from amp.h – part 1

    - by Daniel Moth
    This posts assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation). Basic structure and parameters Now we are ready for part 1 of the description of the new overload for the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters: a grid<N> (think of it as an alias to extent) a restrict(direct3d) lambda, whose signature is such that it returns void and accepts an index of the same rank as the grid So it looks something like this (with generous returns for more palatable formatting) assuming we are dealing with a 2-dimensional space: // some_code_A parallel_for_each( g, // g is of type grid<2> [ ](index<2> idx) restrict(direct3d) { // kernel code } ); // some_code_B The parallel_for_each will execute the body of the lambda (which must have the restrict modifier), on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (aka as the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the grid object you pass, as described next. You know that grid is simply a wrapper on extent. In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your grid was setup by some_code_A as follows: extent<2> e(2,3); grid<2> g(e); ...then given that: e.size()==6, e[0]==2, and e[1]=3 ...the six index<2> objects it generates (and hence the values that your lambda would receive) are:    (0,0) (1,0) (0,1) (1,1) (0,2) (1,2) So what the above means is that the lambda body with the algorithm that you wrote will get executed 6 times and the index<2> object you receive each time will have one of the values just listed above (of course, each one will only appear once, the order is indeterminate, and they are likely to call your code at the same exact time). Obviously, in real GPU programming, you'd typically be scheduling thousands if not millions of threads, not just 6. If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?" Passing data in and out It is a good question, and in data parallel algorithms indeed you typically want to pass some data in, perform some operation, and then typically return some results out. The way you pass data into the kernel, is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables. In the example above, the lambda was written in a fairly useless way with an empty capture list: [ ](index<2> idx) restrict(direct3d), where the empty square brackets means that no variables were captured. If instead I write it like this [&](index<2> idx) restrict(direct3d), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error. 
This has to do with one of the direct3d restrictions, where only one type can be capture by reference: objects of the new concurrency::array class that I'll introduce in the next post (suffice for now to think of it as a container of data). If I write the lambda line like this [=](index<2> idx) restrict(direct3d), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for direct3d. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, that I'll introduce in the post after next. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn’t be able to observe changes to it after the parallel_for_each call (e.g. in some_code_B region since it was passed by value) – the exception to this rule is the array_view since (as we'll see in a future post) it is a wrapper for data, not a container. Finally, for completeness, you can write your lambda, e.g. like this [av, &ar](index<2> idx) restrict(direct3d) where av is a variable of type array_view and ar is a variable of type array - the point being you can be very specific about what variables you capture and how. So it looks like from a large data perspective you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array are ready to be used in the some_code_B region. We'll talk more about all this in future posts… (a)synchronous Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality, it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately by the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda in the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality.   That's all for now, we'll revisit the parallel_for_each description, once we introduce properly array and array_view – coming next. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember you can repeat this process with other objects and other file types; again, I am illustrating the ease of integration.

The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs, and in this case we want to specify those ourselves). So I will create a simpleFileLoad process to house our flow. You will start with an empty canvas, so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left-side Partner Links (left is input). You name the service; in this case I chose LoadFile. Press Next. We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file, and specify the default Operation Name as Read. Press Next.

The next step is to tell the adapter the location of the files, how to process them and what to do with them after they have been processed. I am using hardcoded locations in this example but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and, more importantly, I am telling the adapter to run the process for each record in the file it encounters. Press Next. Now I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next.

At this stage I have no explanation of the format of the input, so I am going to invoke the Native Format Wizard, which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example I will use Delimited, as I am going to load a CSV file. Press Next. The best way for the wizard to work is with a sample. I have a sample file and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: if you are using a language with characters other than US-ASCII, it is at this point that you specify the character set to use. Press Next. The sample contains multiple instances of a single record type (the wizard supports complex types as well); we will use the appropriate setting for our file. Press Next. You have to specify the file element and the record element. These will be used by the wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and User Record as my record root element. Press Next. As the file is CSV, the delimiter is ",", and I will also specify that the End Of Line (EOL) indicator indicates the end of a record. Press Next. Up until this point you have not given the columns their names. In my case my sample includes the column names in the first record. This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next.

The wizard now generates the schema for the input file. You can specify a name for the schema; I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press OK to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.

Now you need to receive the input from the LoadFile component, so place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. We link the Receive node with the LoadFile component by dragging the leftmost connect node of the Receive node to the LoadFile component. Once the link is established you need to name the Receive node appropriately and, as in the previous post in this series, generate input variables for the BPEL process to hold the input records in. You now need to add the product Web Service. The process is the same as described in the previous post in this series: you drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.

Now, to get the inputs from the file to the service, you have to use a Transform (you can use an Assign action, but a Transform action is more flexible). You drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node Mapper File, associate the source of the mapping with the schema from the Receive node, and make the output the input variable of the Invoke node. We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed to use the transform is that we will be telling the AS-User service that we want to issue an update action. Remember, when we registered the service we actually used Read as the default; if we do not otherwise inform the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify a transactionType of UPD (for update). The mapping is now complete.

The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file, with the same pattern/name as you specified, in the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product, and you can repeat this process for each type of file to process. While this was a simple example, it illustrates that loading data can be achieved using SOA Suite in conjunction with our products.
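To make the file format concrete, a hypothetical sample of the kind of CSV the Native Format Wizard is pointed at in this walkthrough could look like the following (comma delimited, one record per line, column names in the first record; the column names and values are illustrative and not taken from the original article):

USER_ID,EMAIL_ADDRESS
FRED01,fred.bloggs@example.com
MARY02,mary.jones@example.com

The wizard wraps each such line in the record element under the LoadUsers root, and the Transform step then maps those elements onto the AS-User service input while setting transactionType to UPD.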

    Read the article

  • SSIS - Range lookups

    - by Repieter
      When developing an ETL solution in SSIS we sometimes need to do range lookups. Several solutions for this can be found on the internet, but we have built another solution which I would like to share, since it's pretty easy to implement and the performance is fast. You can download the sample package to see how it works. Make sure you have the AdventureWorks2008R2 and AdventureWorksDW2008R2 databases installed. (Apologies for the layout of this blog, I don't do this too often :))

To give a little bit more information about the example, this is basically what it does: we load a fact table and do an SCD type 2 lookup operation against the Product dimension. This is done with a script component.

First we query the data warehouse to create the lookup dataset. The query that is used for that is:

SELECT
    [ProductKey]
    ,[ProductAlternateKey]
    ,[StartDate]
    ,ISNULL([EndDate], '9999-01-01') AS EndDate
FROM [DimProduct]

The output of this query is stored in a DataTable:

string lookupQuery = @"
                    SELECT
                        [ProductKey]
                        ,[ProductAlternateKey]
                        ,[StartDate]
                        ,ISNULL([EndDate], '9999-01-01') AS EndDate
                    FROM [DimProduct]";

OleDbCommand oleDbCommand = new OleDbCommand(lookupQuery, _oleDbConnection);
OleDbDataAdapter adapter = new OleDbDataAdapter(oleDbCommand);

_dataTable = new DataTable();
adapter.Fill(_dataTable);

Now that the dimension data is stored in the DataTable, we use the following method to do the actual lookup:

public int RangeLookup(string businessKey, DateTime lookupDate)
{
    // set default return value (Unknown)
    int result = -1;

    DataRow[] filteredRows;
    filteredRows = _dataTable.Select(string.Format("ProductAlternateKey = '{0}'", businessKey));

    for (int i = 0; i < filteredRows.Length; i++)
    {
        // check if the lookupdate is found between the startdate and enddate of any of the records
        if (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
        {
            result = (filteredRows[i][0] == null) ? -1 : (int)filteredRows[i][0];
            break;
        }
    }

    filteredRows = null;

    return result;
}

This method is executed for every row that passes through the script component. This is implemented in the ProcessInputRow method:

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Perform the lookup operation on the current row and put the value in the Surrogate Key attribute
    Row.ProductKey = RangeLookup(Row.ProductNumber, Row.OrderDate);
}

Now what actually happens?!

1. Every record passes the business key and the order date to the RangeLookup method.
2. The DataTable is then filtered on the business key of the current record. The output is stored in a DataRow[] object.
3. We loop over the DataRow[] object to see where the order date meets the following expression: (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
4. When the expression returns true (so where the date is between the StartDate and the EndDate), the surrogate key of the dimension record is returned.

We have done some testing with this solution and it works great for us. Hope others can use this example to do their range lookups.
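As a purely illustrative example of what the lookup does (these rows are invented, not part of the original sample), suppose DimProduct contains the following type 2 history for one business key. A lookup with businessKey 'BK-1001' and an order date of 2007-06-15 falls in the second row's [StartDate, EndDate) range, so RangeLookup returns surrogate key 211.

ProductKey   ProductAlternateKey   StartDate    EndDate
210          BK-1001               2005-07-01   2006-06-30
211          BK-1001               2006-06-30   9999-01-01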

    Read the article

  • JBoss 7.1.1 + Spring 3.1 + JPA 2 (Hibernate 3.6.8) > Cannot parse beans.xml

    - by Mashrur
    Trying to deploy a JSF 2 app with Spring 3.1 + JPA 2 (Hibernte 3.6.8) into JBoss 7.1.1 AS. The app was working fine on tomcat 7. Now, I have already added and changed some configurations. Added jboss-deployment-structure.xml <jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1"> <deployment> <exclusions> <module name="org.apache.log4j" /> <module name="javax.faces.api" slot="main"/> <module name="com.sun.jsf-impl" slot="main"/> <module name="org.hibernate"/> <module name="org.javassist" /> <module name="javaee.api" /> <module name="org.hibernate.validator" /> <module name="org.jboss.as.web" /> <module name="org.jboss.logging" /> <module name="javax.persistence.api" /> <module name="org.jboss.interceptor" /> </exclusions> </deployment> </jboss-deployment-structure> 2. Inside web.xml added these lines: <persistence-unit-ref> <persistence-unit-ref-name>persistence/persistenceUnit</persistence-unit-ref-name> <persistence-unit-name>persistenceUnit</persistence-unit-name> </persistence-unit-ref> 3. Inside applicationContext.xml, changed bean definition for entityManagerFactory by adding this property <property name="persistenceXmlLocation" value="classpath*:META-INF/persistence.xml"/> Now, do I still need to do something more than these which is obvious to you? Because, while I am trying to deploy it from eclipse indigo SR2, getting this 14:10:32,046 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA 14:10:33,054 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA 14:10:33,200 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final "Brontes" starting 14:10:36,375 INFO [org.xnio] XNIO Version 3.0.3.GA 14:10:36,384 INFO [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http) 14:10:36,432 INFO [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA 14:10:36,462 INFO [org.jboss.remoting] JBoss Remoting version 3.2.3.GA 14:10:36,587 INFO [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers 14:10:36,714 INFO [org.jboss.as.security] (ServerService Thread Pool -- 44) JBAS013101: Activating Security Subsystem 14:10:36,735 INFO [org.jboss.as.naming] (MSC service thread 1-2) JBAS011802: Starting Naming Service 14:10:37,043 INFO [org.jboss.as.mail.extension] (MSC service thread 1-6) JBAS015400: Bound mail session [java:jboss/mail/Default] 14:10:37,208 INFO [org.jboss.as.connector] (MSC service thread 1-7) JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final) 14:10:37,295 INFO [org.jboss.as.security] (MSC service thread 1-7) JBAS013100: Current PicketBox version=4.0.7.Final 14:10:37,740 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3) 14:10:38,792 INFO [org.jboss.ws.common.management.AbstractServerConfig] (MSC service thread 1-3) JBoss Web Services - Stack CXF Server 4.0.2.GA 14:10:38,833 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-2) Starting Coyote HTTP/1.1 on http-localhost-127.0.0.1-8080 14:10:39,534 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-1) JBAS015012: Started FileSystemDeploymentService for directory F:\work\softwares\Application Servers\JBoss\jboss-as-7.1.1.Final\jboss-as-7.1.1.Final\standalone\deployments 14:10:39,618 INFO [org.jboss.as.remoting] (MSC service thread 1-3) JBAS017100: Listening on localhost/127.0.0.1:4447 14:10:39,623 INFO [org.jboss.as.remoting] (MSC service thread 1-2) JBAS017100: Listening on /127.0.0.1:9999 14:10:39,698 
INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment treasury.war 14:10:39,706 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found com.misl.treasury.ui.war in deployment directory. To trigger deployment create a file called com.misl.treasury.ui.war.dodeploy 14:10:40,105 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS] 14:10:40,399 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "com.misl.treasury.ui.war" 14:10:40,405 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015876: Starting deployment of "treasury.war" 14:10:55,283 WARN [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Tomcat6InjectionProvider:org.apache.catalina.util.DefaultAnnotationProcessor' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,289 WARN [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Jetty6InjectionProvider:org.mortbay.jetty.plus.annotation.InjectionCollection' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,428 INFO [org.jboss.as.jpa] (MSC service thread 1-7) JBAS011401: Read persistence.xml for persistenceUnit 14:10:55,843 WARN [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Tomcat6InjectionProvider:org.apache.catalina.util.DefaultAnnotationProcessor' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,849 WARN [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Jetty6InjectionProvider:org.mortbay.jetty.plus.annotation.InjectionCollection' for service type 'com.sun.faces.spi.injectionprovider' 14:10:56,010 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-2) MSC00001: Failed to start service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: org.jboss.msc.service.StartException in service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: Failed to process phase POST_MODULE of deployment "com.misl.treasury.ui.war" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_23] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_23] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_23] Caused by: java.lang.NoClassDefFoundError: javax/servlet/ServletOutputStream at java.lang.Class.getDeclaredConstructors0(Native Method) [rt.jar:1.6.0_23] at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389) [rt.jar:1.6.0_23] at java.lang.Class.getConstructor0(Class.java:2699) [rt.jar:1.6.0_23] at java.lang.Class.getConstructor(Class.java:1657) [rt.jar:1.6.0_23] at org.jboss.as.web.deployment.jsf.JsfManagedBeanProcessor.deploy(JsfManagedBeanProcessor.java:108) at 
org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: java.lang.ClassNotFoundException: javax.servlet.ServletOutputStream from [Module "deployment.com.misl.treasury.ui.war:main" from Service Module Loader] at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:190) at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:468) at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:456) at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:423) at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398) at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:120) ... 11 more 14:10:56,628 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC00001: Failed to start service jboss.deployment.unit."treasury.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."treasury.war".PARSE: Failed to process phase PARSE of deployment "treasury.war" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_23] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_23] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_23] Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: SAXException parsing vfs:/F:/work/softwares/Application Servers/JBoss/jboss-as-7.1.1.Final/jboss-as-7.1.1.Final/bin/content/treasury.war/WEB-INF/beans.xml at org.jboss.as.weld.deployment.BeansXmlParser.parse(BeansXmlParser.java:100) at org.jboss.as.weld.deployment.processors.BeansXmlProcessor.parseBeansXml(BeansXmlProcessor.java:133) at org.jboss.as.weld.deployment.processors.BeansXmlProcessor.deploy(BeansXmlProcessor.java:97) at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: org.xml.sax.SAXParseException: Premature end of file. 
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:196) at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:175) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:394) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:322) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:281) at org.apache.xerces.impl.XMLScanner.reportFatalError(XMLScanner.java:1459) at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(XMLDocumentScannerImpl.java:903) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324) at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:845) at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:768) at org.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108) at org.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1196) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:555) at org.apache.xerces.jaxp.SAXParserImpl.parse(SAXParserImpl.java:289) at org.jboss.as.weld.deployment.BeansXmlParser.parse(BeansXmlParser.java:94) ... 8 more 14:10:56,670 INFO [org.jboss.as] (MSC service thread 1-1) JBAS015951: Admin console listening on http://127.0.0.1:9990 14:10:56,672 ERROR [org.jboss.as] (MSC service thread 1-1) JBAS015875: JBoss AS 7.1.1.Final "Brontes" started (with errors) in 25527ms - Started 143 of 222 services (2 services failed or missing dependencies, 76 services are passive or on-demand) 14:10:56,671 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015871: Deploy of deployment "treasury.war" was rolled back with no failure message 14:10:56,679 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015870: Deploy of deployment "com.misl.treasury.ui.war" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE: Failed to process phase POST_MODULE of deployment \"com.misl.treasury.ui.war\""}} 14:10:56,851 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) JBAS015877: Stopped deployment com.misl.treasury.ui.war in 172ms 14:10:57,068 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015877: Stopped deployment treasury.war in 395ms 14:10:57,070 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report JBAS014777: Services which failed to start: service jboss.deployment.unit."treasury.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."treasury.war".PARSE: Failed to process phase PARSE of deployment "treasury.war" service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: org.jboss.msc.service.StartException in service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: Failed to process phase POST_MODULE of deployment "com.misl.treasury.ui.war" 14:10:57,079 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) {"JBAS014653: Composite operation failed and was rolled back. 
Steps that failed:" => {"Operation step-2" => {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE: Failed to process phase POST_MODULE of deployment \"com.misl.treasury.ui.war\""}}}} 14:10:57,087 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back And by the way, JBOSS_HOME\modules\javax\api\main folder only contains a module.xml, no jars. Tried to add jboss-servlet-api_3.0_spec-1.0.0.Final.jar file in that directory and updated module.xml too. But, it shows a very long trail of stacktraces :) I have even removed beans.xml. No change. I have never tried JBoss before. Any help would be highly appreciated.

    Read the article

  • Capturing image with WDS is stuck on 'Capturing Windows Image Metadata'

    - by user74499
    Hello, I'm trying to capture a rather large (100 GB) Windows XP partition to a WIM file on an attached USB hard drive. Under 'Task Progress' it's saying 'Capturing Windows Image Metadata', which is where it has been for a while (about 1.5 hours); the blue bar is at the end of the screen, i.e. 100%. I can move the windows around the screen, so I suspect the operation hasn't crashed yet, but does this part of the process take a long time? I have only ever captured a 3 GB partition before. Thanks.

    Read the article

  • Windows 2008 DFS Replication Issue

    - by e0594cn
    We have two Server 2008 R2 servers running DFS-R (named dfs01 and dfs02) in a 2008 R2 domain. Today I found that files on server dfs01 cannot be replicated to dfs02, so I used the following command to check the backlog:

dfsrdiag backlog /rgname:<group> /rfname:<folder> /sendingmember:dfs01 /receivingmember:dfs02

After executing the command, I get the following error:

Failed to execute GetVersionVector method. Err: -2147217406 (0x80041002) Operation failed.

How can I resolve this?

    Read the article

  • IIS7 - Lock Violation error, HTTP handlers, modules, and the <clear /> element

    - by Daniel Schaffer
    I have an ASP.NET site that uses its own set of HTTP handlers and does not need any modules. So, in IIS6, all I had to do was this in my web.config:

<httpModules>
  <clear />
</httpModules>

However, if I try to do the same in the system.webServer area for IIS7, I get a 500 error when I try to view the site, and in IIS Manager when I try to view the handler mappings, I get a popup box with the message:

There was an error while performing this operation
Details:
Filename: \?\C:\Sites\TheWebSiteGoesHere\web.config
Line number: 39
Error: Lock violation

Line 39 is where the <clear /> element is. Some googling led me to a solution involving running this command:

%windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/modules

...but that did not solve the problem.
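For reference, the system.webServer section being described would look roughly like this (a sketch of the configuration under discussion, not a verified fix):

<system.webServer>
  <modules>
    <clear />  <!-- the element on line 39 that triggers the lock violation -->
  </modules>
</system.webServer>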

    Read the article

  • SFTP - Unable to Overwrite File - "EOF received from remote side"

    - by NateReid
    I am working with a customer to troubleshoot an error they get when trying to overwrite ("PUT") a file on our SFTP site. When the root directory is empty and they upload the file there is no problem, but the error occurs when they try to overwrite an existing file. The error they receive when issuing a put command from Java CAPS is:

Batch SFTP eWay error when doing data transfer operation in [PUT()], message=[EOF received from remote side [Unknown cause]].|#]

When they use WinSCP or FileZilla to put the file, it overwrites fine with no errors. We have tried:

- Multiple different files
- Checking their SFTP user permissions
- Giving full access permissions to "everyone" on the user's root directory in Windows
- Recreating their user account
- Ensuring no other processes are using/locking the files that are being overwritten

We are using Cerberus Professional FTP server software. Any ideas of what else we could try?

    Read the article

  • Outlook, Delegates and IgnoreSOBError registry work-around

    - by Yougotiger
    We just moved to exchange/Outlook, and our users are trying to create delegations to their admin assistants. We're getting the following message: The Delegates settings were not saved correctly. Unable to activate send-on-behalf-of list. You do not have sufficient permission to perform this operation on this object. It seems the issue can be resolved with a registry entry as per the KB article: http://support.microsoft.com/kb/950794/en-us But I'm wondering if this is just a band-aid or if it gets to the root of the problem?

    Read the article

  • Batch file script for Enable & disable the "use automatic Configuration Script"

    - by Tijo Joy
    My intention is to create a .bat file that toggles the "Use automatic configuration script" check box in Internet Settings. The following is my script:

@echo OFF
setlocal ENABLEEXTENSIONS
set KEY_NAME="HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
set VALUE_NAME=AutoConfigURL

FOR /F "usebackq skip=1 tokens=1-3" %%A IN (`REG QUERY %KEY_NAME% /v %VALUE_NAME% 2^>nul`) DO (
    set ValueName=%%A
    set ValueType=%%B
    set ValueValue=%%C
)

@echo Value Name = %ValueName%
@echo Value Type = %ValueType%
@echo Value Value = %ValueValue%

IF NOT %ValueValue%==yyyy (
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "yyyy" /f
    echo Proxy Enabled
) else (
    echo Hai
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "" /f
    echo Proxy Disabled
)

The output I'm getting for the "Proxy Enabled" part is:

Value Name = AutoConfigURL
Value Type = REG_SZ
Value Value = yyyy
Hai
The operation completed successfully.
Proxy Disabled

But the "Proxy Enable" part isn't working fine; the output I get is:

Value Name = AutoConfigURL
Value Type = REG_SZ
Value Value =
( was unexpected at this time.

The variable "Value Value" is not getting set when we try to do the proxy enable.
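The "( was unexpected at this time." message is what cmd.exe typically reports when %ValueValue% expands to an empty string, so the IF line is parsed as IF NOT ==yyyy (. One common guard is to quote both sides of the comparison; the following is a suggested sketch of a drop-in replacement for the IF/else block above, not part of the original script:

IF NOT "%ValueValue%"=="yyyy" (
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "yyyy" /f
    echo Proxy Enabled
) else (
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "" /f
    echo Proxy Disabled
)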

    Read the article

  • Problems with SQL Server 2008 - "The client was unable to reuse a session with SPID 62, which had ..

    - by GrZeCh
    Hello, I'm having problems with my SQL Server 2008 installation (10.0.2531.0 - SP1 installed). It works as a database server for a small hosting environment (about 500 sites). I'm getting errors like this in the Windows event log:

The client was unable to reuse a session with SPID 62, which had been reset for connection pooling. The failure ID is 29. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

And when I run this:

SELECT * FROM sys.dm_os_performance_counters WHERE object_name = 'SQLServer:General Statistics'

I see that one of the counters looks a little odd:

Logins/sec 429
Connection Reset/sec 163459
Logouts/sec 399
User Connections 30
Logical Connections 33

Any ideas how to check what is causing this problem?

    Read the article

  • SOLVED:Bootloader isn't executable booting XEN PV Guest with virtual-manager

    - by user2284355
    I am going insane with an error I am encountering while trying to install a PV guest of Debian Wheezy on an Ubuntu Server Precise default Xen build with libvirt. The steps I take with virt-manager are the following:

1. Net install via: http://ftp.es.debian.org/debian/dists/stable/main/installer-amd64/
2. The install process is flawless, installed via VNC over virt-manager.
3. When the VM starts I get the following error:

Error starting domain: POST operation failed: xend_post: error from xen daemon: (xend.err "Bootloader isn't executable")

Most answers I have found on Google say that I need to edit the VM's .cfg file and correct the path to pygrub, but virt-manager does not seem to create this file (I have searched the entire drive with "find"). Another detail is that virsh list --all shows no VMs (not even dom0), while the command xm list shows all of them. Any help is much appreciated.

EDIT: Connected remotely via virsh:

virsh -c xen+ssh://user@ip dumpxml vmname

Found the line: /usr/bin/pygrub

ln -s /usr/lib/xen-4.1/bin/pygrub /usr/bin/pygrub

Now it works. If anyone can think of a better solution give me a shout. Cheers

    Read the article

  • How to get rid of Thunderbird error "[NONEXISTENT] Unknown Mailbox: Trash"

    - by endolith
    Every time I start Thunderbird, I get this error: The current operation on 'Trash' did not succeed. The mail server for account [email protected] responded: [NONEXISTENT] Unknown Mailbox: Trash (now in authenticated state) (Failure). I've figured out that it goes away if I create folders in my Gmail accounts called "[Imap]/Trash" and "[Imap]/Sent", but why are these necessary? Even if I tell Thunderbird to unsubscribe from them, it still notices when I remove the labels and starts popping up this error again. One account also complains if I remove the "Sent Items" label.

    Read the article

  • Reconfigure RAID on Dell PowerEdge T710

    - by Stefano Borini
    I have a Dell PowerEdge T710 under my feet at this very moment, with Red Hat Enterprise Server 5.3. I have six 1 TB disks and two 500 GB disks. parted reports two devices, one 500 GB and the other 4 TB. So I assume the RAID has been set up as a mirror of the two 500 GB disks, with the remaining six as RAID 5. I say "I assume" because it does not make sense: with 6 disks in RAID 5 I should obtain a total space of 5 TB, not 4 TB. It's not even RAID 10, which would end up as a 3 TB unit. How can I check and eventually modify the RAID array definition? On the Fujitsu Siemens machine I played with some time ago, I had the chance to enter the controller BIOS at boot, but here I don't see a clear way to perform this operation.

    Read the article

  • SBS 08 Backup fail

    - by Bastien974
    I'm trying to back up my SBS 08 server (only C:) with Windows Server Backup. It fails a few minutes after it starts:

Backup started at '08/12/2009 1:27:23 PM' failed as Volume Shadow copy operation failed for backup volumes with following error code '2155348022'. Please rerun backup once issue is resolved.

In the Event Viewer I have lots of errors:

VSS : 12289
SQLVDI : 1
MSSQL$MICROSOFT##SSEE : 18210
MSSQL$MICROSOFT##SSEE : 3041
SQLWRITER : 24583

All VSS and SQL services are started. I have WSUS 3.0 and Exchange 07. I don't have any third-party backup software running at the same time. Thanks for your help!

    Read the article

  • Windows 8.1 Enterprise Sysprep Error

    - by Anurag Shetti
    I am trying to sysprep my Windows 8.1 Enterprise (MSDN) installation and I get the following errors. I have upgraded the machine from Windows 8 to Windows 8.1 and it contains all the configuration for VS 2012 and the rest.

Exact error: Sysprep was not able to validate your Windows installation

Error msg line in log:

C:\Users\André>err 0x8007139f
# as an HRESULT: Severity: FAILURE (1), FACILITY_WIN32 (0x7), Code 0x139f
# for hex 0x139f / decimal 5023
ERROR_INVALID_STATE winerror.h
# The group or resource is not in the correct state to
# perform the requested operation.
# 1 matches found for "0x8007139f"

SYSPRP ActionPlatform::GetValue: Error from RegQueryValueEx on value SysprepMode under key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\Sysprep; dwRet = 0x2

I have searched HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\Sysprep but I couldn't find anything for SysprepMode. The value for sysprep was (value not set).

    Read the article

  • CSG operations on implicit surfaces with marching cubes [SOLVED]

    - by Mads Elvheim
    I render isosurfaces with marching cubes (or rather marching squares, as this is 2D) and I want to do set operations like set difference, intersection and union. I thought this was easy to implement, by simply choosing between two vertex scalars from two different implicit surfaces, but it is not. For my initial testing I tried two circles and the set operation difference, i.e. A - B. One circle is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations does give the expected result.

void march(const vec2& cmin, //min x and y for the grid cell
           const vec2& cmax, //max x and y for the grid cell
           std::vector<vec2>& tri,
           float iso,
           float (*cmp1)(const vec2&), //distance from stationary circle
           float (*cmp2)(const vec2&)  //distance from moving circle
          )
{
    unsigned int squareindex = 0;
    float scalar[4];
    vec2 verts[8];

    /* initial setup of the grid cell */
    verts[0] = vec2(cmax.x, cmax.y);
    verts[2] = vec2(cmin.x, cmax.y);
    verts[4] = vec2(cmin.x, cmin.y);
    verts[6] = vec2(cmax.x, cmin.y);

    float s1, s2;

    /**********************************
    ********For-loop of interest******
    *******Set difference between ****
    *******two implicit surfaces******
    **********************************/
    for(int i=0, j=0; i<4; ++i, j+=2){
        s1 = cmp1(verts[j]);
        s2 = cmp2(verts[j]);

        if((s1 < iso)){                 //if inside circle1
            if((s2 < iso)){             //if inside circle2
                scalar[i] = s2;         //then set the scalar to the moving circle
            } else {
                scalar[i] = s1;         //only inside circle1
                squareindex |= (1<<i);  //mark as inside
            }
        } else {
            scalar[i] = s1;             //inside neither circle
        }
    }

    if(squareindex == 0)
        return;

    /* Usual interpolation between edge points to compute the new intersection points */
    verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]);
    verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]);
    verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]);
    verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]);

    for(int i=0; i<10; ++i){            //10 = maximum 3 triangles, + one end token
        int index = triTable[squareindex][i]; //look up our indices for triangulation
        if(index == -1)
            break;
        tri.push_back(verts[index]);
    }
}

This gives me weird jaggies. It looks like the CSG operation is done without interpolation; it just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this. A full testcase can be downloaded HERE.

EDIT: Basically, my implementation of marching squares works fine. It is my scalar field which is broken, and I wonder what the correct way would look like. Preferably I'm looking for a general approach to implement the three set operations I discussed above, for the usual primitives (circle, rectangle/square, plane).
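For reference, the standard way to build a combined scalar field for CSG is not to branch per corner on inside/outside, but to combine the two samples with min/max at every vertex and then run the unmodified marching squares on that single combined value. A hedged sketch of the textbook formulas (my addition, not code from the original post), assuming values below iso count as inside:

#include <algorithm>

// Per-vertex CSG combinations of two scalar samples s1 (surface A) and s2 (surface B).
inline float csg_union       (float s1, float s2)            { return std::min(s1, s2); }            // A or B
inline float csg_intersection(float s1, float s2)            { return std::max(s1, s2); }            // A and B
inline float csg_difference  (float s1, float s2, float iso) { return std::max(s1, 2.0f*iso - s2); } // A minus B; reduces to max(s1, -s2) when iso == 0

In the loop above this would amount to scalar[i] = csg_difference(s1, s2, iso) and marking the corner as inside when scalar[i] < iso, so the edge interpolation always works with one consistent field instead of mixing values from two different ones.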

    Read the article

  • Exchange migration to 2007 making Outlook 2003 unable to read meeting requests

    - by Kvad
    Hi, We are currently moving from Exchange 2003 to 2007 (8.2 build 176.2). We have encountered an issue with one user. In Outlook 2003, when getting a meeting request:

"Can't open this item. Could not complete the operation. One or more parameter values are not valid."

The item cannot be previewed in the reading pane either. The item can be viewed fine in OWA and on an iPhone. I've tried with cache mode off and on, and on different computers. Same issue. There are the following entries on the account:

SMTP [email protected] [email protected]
X400 C=AU;A= ;P=Company Name;O=Exchange;S=LastName;G=FirstName;

I'm loath to recreate the account; that would be an extreme last resort. Any ideas? Thanks in advance.

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >