Search Results

Search found 8494 results on 340 pages for 'movie sound creation'.


  • Can ANYONE get _lockroot to work?

    - by webfac
    Hi Guys, I have the following code which ultimately loads a SWF into movieclip 'myloader' using a movie clip loader, code as follows: var myload:MovieClipLoader = new MovieClipLoader(); var listener:Object = new Object(); myload.addListener(listener); listener.onLoadStart = function(){ animcontainer.myloader._lockroot = true; trace("Started"); } listener.onLoadInit = function(){ animcontainer.myloader._lockroot = true; trace("finished and locked"); } listener.onLoadComplete = function(){ animcontainer.myloader._lockroot = true; } myload.loadClip(path, animcontainer.myloader); The swf I am loading has pause, rewind and play buttons that must be referencing _root as they work fine when played alone. Upon loading them into myloader they no longer work. Based on the above code surely the myloader clip should be locking as _root after the load is complete? I have Googled myself dry on this one, no luck. ANY help will be much appreciated, Thanks.
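    One commonly suggested fix (a hedged sketch, not verified against this exact setup): _lockroot must be set on the container clip before the load begins; setting it only in the loader callbacks, as above, is often too late for content that reads _root during its own initialization.

      // set _lockroot on the target clip BEFORE starting the load
      animcontainer.myloader._lockroot = true;
      var myload:MovieClipLoader = new MovieClipLoader();
      myload.loadClip(path, animcontainer.myloader);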

    Read the article

  • NSInvalidArgumentException – app terminates while I'm trying to copy a captured video to the documents folder

    - by ChrNis
    I'd like to try the wisdom of the crowds... because I'm frustrated right now. Thanks in advance. So here is my code: - (void)imagePickerController:(UIImagePickerController *)ipc didFinishPickingMediaWithInfo:(NSDictionary *)info{ NSLog(@"info: %@",info); NSString *newFilename = [NSString stringWithFormat:@"%@/%@.mov", [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"], [NSString stringWithFormat:@"%d", (long)[[NSDate date] timeIntervalSince1970]]]; NSLog(@"newFilename: %@",newFilename); NSFileManager *filemgr = [NSFileManager defaultManager]; NSError *err; if ([filemgr copyItemAtPath:[info objectForKey:@"UIImagePickerControllerMediaURL"] toPath:newFilename error:&err] == YES) NSLog (@"Move successful"); else NSLog (@"Move failed"); and this is the log: 2010-05-16 18:19:01.975 erlkoenig[7099:307] info: { UIImagePickerControllerMediaType = "public.movie"; UIImagePickerControllerMediaURL = "file://localhost/private/var/mobile/Applications/BE25F9B5-2D08-4B59-8B62-D04DF7BB7E5B/tmp/-Tmp-/capture-T0x108cb0.tmp.8M81HU/capturedvideo.MOV"; } newFilename: /var/mobile/Applications/BE25F9B5-2D08-4B59-8B62-D04DF7BB7E5B/Documents/1274026741.mov [NSURL fileSystemRepresentation]: unrecognized selector sent to instance 0x1c1f90 *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSURL fileSystemRepresentation]: unrecognized selector sent to instance 0x1c1f90
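    The exception itself points at the likely fix (a hedged sketch): the UIImagePickerControllerMediaURL entry is an NSURL, while copyItemAtPath:toPath:error: expects an NSString path, so NSFileManager ends up sending an NSString-only selector (fileSystemRepresentation) to an NSURL. Converting with -path first should clear the crash:

      NSURL *mediaURL = [info objectForKey:UIImagePickerControllerMediaURL];
      NSError *err = nil;
      // -path turns the file:// NSURL into the plain path string NSFileManager expects
      if ([filemgr copyItemAtPath:[mediaURL path] toPath:newFilename error:&err])
          NSLog(@"Copy successful");
      else
          NSLog(@"Copy failed: %@", err);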

    Read the article

  • Has anybody here helped fake-create technical details for movies / TV shows?

    - by none
    This is based on another SO question: Have you ever recognized any source code visible in a movie? where we were discussing some of the cheap tricks used to add technical "details" to a scene (e.g. showing HTML/JavaScript code, hexdumps, or mockup GUIs). Which brought up the question: has anybody here helped create "computer effects" (e.g. as a technical consultant) for movies/TV shows to add some pseudo-technical details to a scene? If you can, please do provide details - what exactly were you supposed to do, and what did you end up doing? Were you proud of your contribution to the final product? What was the pay like?

    Read the article

  • Embedded YouTube video slows down page load

    - by Tom S
    Hi, total newbie here - trying to figure out how to make a page load faster with one embedded YouTube video on it - a very modest page takes an extra 5 seconds to completely load once the YouTube player shows up. I'd either like the page to load first, or to load the video only when the user clicks on it, but I don't know how to do that. Here is the YouTube video embed code: <object width="480" height="385"><param name="movie" value="http://www.youtube.com/v/kfZIIKVfJ1w&hl=en_US&fs=1&rel=0&hd=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/kfZIIKVfJ1w&hl=en_US&fs=1&rel=0&hd=0" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="385"></embed></object> Thanks for your help!
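    A common workaround (a hedged sketch; the div id and function name are illustrative, the thumbnail URL uses YouTube's public img.youtube.com/vi/<id>/0.jpg pattern): render only the video's thumbnail at page load and build the player the first time the user clicks.

      <div id="yt-placeholder" style="width:480px; height:385px; cursor:pointer"
           onclick="loadPlayer()">
        <img src="http://img.youtube.com/vi/kfZIIKVfJ1w/0.jpg"
             width="480" height="385" alt="Click to play video">
      </div>
      <script type="text/javascript">
      // swap the thumbnail for the real player only on demand
      function loadPlayer() {
        document.getElementById('yt-placeholder').innerHTML =
          '<embed src="http://www.youtube.com/v/kfZIIKVfJ1w&hl=en_US&fs=1&rel=0&hd=1&autoplay=1"' +
          ' type="application/x-shockwave-flash" allowscriptaccess="always"' +
          ' allowfullscreen="true" width="480" height="385"></embed>';
      }
      </script>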

    Read the article

  • Is it possible to remove an event listener in ActionScript 3 (Flash)?

    - by DomingoSL
    In frame 1 of my movie, part of my code is: stage.addEventListener(Event.RESIZE, resizeHandler); It all works fine, but when I go to frame 2 the listener is still there, while the function resizeHandler no longer exists (and I don't want it to run). So the console outputs this: TypeError: Error #1009: Cannot access a property or method of a null object reference. at index_fla::MainTimeline/resizeHandler() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at flash.display::Stage/dispatchEvent() at index_fla::MainTimeline/frame2() Is it possible to remove the event listener before frame 2? Thanks!!!
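    Yes (a minimal sketch): detach the listener with removeEventListener, passing the same references that were given to addEventListener.

      // run this while still on frame 1, before navigating away,
      // so the resizeHandler reference is still valid
      stage.removeEventListener(Event.RESIZE, resizeHandler);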

    Read the article

  • "Volkswagen Passat" Navigation

    - by joe_midi
    http://web.vw.com/all-new-passat-experience/ The navigation on this .swf is very impressive, and I was wondering how I would go about replicating it. I want to know how each of the sections (Design / Technology / Safety / Performance) can expand and still fit together. I've started with MOUSE_OVER events, but I'm not entirely sure how to order them so each section is the right size for every mouse-over event. This is the code I have for each section; I have created a movie clip with a mask that expands on mouse-over and brings the selected clip to the highest depth. this.firstButton01.addEventListener(MouseEvent.MOUSE_OVER, firstButton01OVER) function firstButton01OVER(e:Event):void { this.firstButton01.gotoAndPlay(2); this.setChildIndex(firstButton01,this.numChildren-1); }
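    One hedged way to keep the sections fitting together (handler names are illustrative, and it assumes each section clip has its collapsed state on frame 1 and an expand animation from frame 2): pair every MOUSE_OVER with a MOUSE_OUT so exactly one section is expanded at a time.

      this.firstButton01.addEventListener(MouseEvent.MOUSE_OVER, onSectionOver);
      this.firstButton01.addEventListener(MouseEvent.MOUSE_OUT, onSectionOut);
      // ...register the same pair on the other three section clips

      function onSectionOver(e:MouseEvent):void {
          var section:MovieClip = MovieClip(e.currentTarget);
          section.gotoAndPlay(2);                   // play the expand animation
          setChildIndex(section, numChildren - 1);  // bring it above its siblings
      }

      function onSectionOut(e:MouseEvent):void {
          // assumption: frame 1 is the collapsed state
          MovieClip(e.currentTarget).gotoAndStop(1);
      }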

    Read the article

  • AS3: Element stays on stage after manipulating the index (depth)

    - by o15a3d4l11s2
    Here is the problem I have: after I change the index of one movieclip using this code oldIndex=getChildIndex(DisplayObject(e.target)); setChildIndex(DisplayObject(e.target), numChildren - 1); and then give the object its old index back setChildIndex(DisplayObject(e.target), oldIndex); and go to another frame of the movie, the element whose index I changed stays on top of all elements in the new frame. My question is: am I doing something wrong, and if not, what can I do so that this element stays only in the frame where it is placed?

    Read the article

  • Ruby code inside quotes

    - by chief
    I would like to embed videos and have managed to do so by manually coding the URL in where needed. If my URL is stored in <%= @vid.url %>, how can I use that string for the value and src parameters? <object width="480" height="385"><param name="movie" value="http://www.youtube.com/videos/abc123"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/videos/abc123" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="385"></embed></object>
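    A minimal sketch: ERB output tags work inside HTML attribute values, so the stored URL can be interpolated directly into both places (assuming @vid.url holds the full player URL):

      <object width="480" height="385">
        <param name="movie" value="<%= @vid.url %>"></param>
        <param name="allowFullScreen" value="true"></param>
        <param name="allowscriptaccess" value="always"></param>
        <embed src="<%= @vid.url %>" type="application/x-shockwave-flash"
               allowscriptaccess="always" allowfullscreen="true"
               width="480" height="385"></embed>
      </object>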

    Read the article

  • Access Ruby on Rails 'public' directory without relative path

    - by huntca
    I have a Flash object I wish to load, and I believe the best place to store that asset is in the public directory. Suppose it's stored in public/flash; there must be a better way to reference the SWF than what I've done below. Note the 'data' element, it has a relative path. def create_vnc_object haml_tag :object, :id => 'flash', :width => '100%', :height => '100%', :type => 'application/x-shockwave-flash', :data => '../../flash/flash.swf' do haml_tag :param, :name => 'movie', :value => '../../flash/flash.swf' end end Is there some Rails variable that points to public?
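    A minimal sketch: everything under public/ is served from the site root, so a root-relative URL avoids the fragile ../../ hops altogether (and, depending on your Rails version, Rails.public_path gives you the on-disk location when you need a filesystem path rather than a URL):

      def create_vnc_object
        # '/flash/flash.swf' resolves against the docroot, i.e. public/
        haml_tag :object, :id => 'flash', :width => '100%', :height => '100%',
                 :type => 'application/x-shockwave-flash',
                 :data => '/flash/flash.swf' do
          haml_tag :param, :name => 'movie', :value => '/flash/flash.swf'
        end
      end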

    Read the article

  • What is LINQ?

    - by Aamir Hasan
    LINQ changes the way data can be retrieved in .NET: it provides a uniform way to retrieve data from any object that implements the IEnumerable<T> interface. With LINQ, arrays, collections, relational data, and XML are all potential data sources. Why LINQ? With LINQ, you can use the same syntax to retrieve data from any data source:
    var query = from e in employees
                where e.id == 1
                select e.name
    The LINQ project has three main parts. LINQ to Objects is an API that provides methods that represent a set of standard query operators (SQOs) to retrieve data from any object whose class implements the IEnumerable<T> interface. These queries are performed against in-memory data. LINQ to ADO.NET augments SQOs to work against relational data; it is composed of three parts: LINQ to SQL (formerly DLinq) is used to query relational databases such as Microsoft SQL Server; LINQ to DataSet supports queries using ADO.NET data sets and data tables; LINQ to Entities is a Microsoft ORM solution, allowing developers to use Entities (an ADO.NET 3.0 feature) to declaratively specify the structure of business objects and use LINQ to query them. LINQ to XML (formerly XLinq) not only augments SQOs but also includes a host of XML-specific features for XML document creation and queries. What You Need to Use LINQ: LINQ is a combination of extensions to .NET languages and the class libraries that support them. To use it, you'll need the following: Obviously LINQ itself, which is available in the Microsoft .NET Framework 3.5 that you can download at http://go.microsoft.com/?linkid=7755937. You can speed up your application development time with LINQ using Visual Studio 2008, which offers visual tools such as the LINQ to SQL designer and Intellisense support for LINQ's syntax. Optionally, you can download the Visual C# 2008 Express Edition tool at www.microsoft.com/vstudio/express/download; it is the free edition of Visual Studio 2008 and offers a lot of LINQ support such as Intellisense and the LINQ to SQL designer. To use LINQ to ADO.NET, you need SQL
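    A minimal, self-contained LINQ to Objects sketch of the query above (the class and sample data are illustrative):

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class Employee { public int id; public string name; }

      class Program
      {
          static void Main()
          {
              var employees = new List<Employee>
              {
                  new Employee { id = 1, name = "Alice" },
                  new Employee { id = 2, name = "Bob" }
              };

              // the same query syntax works for any IEnumerable<T> source
              var query = from e in employees
                          where e.id == 1
                          select e.name;

              foreach (var name in query)
                  Console.WriteLine(name); // prints "Alice"
          }
      }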

    Read the article

  • Mapping a Vertex Buffer in DirectX11

    - by judeclarke
    I have a VertexBuffer that I am remapping on a per-frame basis for a bunch of quads that are constantly updated, sharing the same material/index buffer but having different widths/heights. However, currently there is a really bad flicker on this geometry. Although it is flickering, the rendered geometry otherwise looks correct. I know it is the vertex buffer mapping, because if I recreate the entire VB then it renders fine. However, as an optimization I figured I would just remap it. Does anyone know what the problem is? The length (width, size) of the vertex buffer is always the same. One might suspect double buffering; however, it only happens when I map/unmap the buffer, which leads me to believe that I am setting some parameters wrong on creation or mapping. I am using DirectX11; my initialization and remap code are: Initialization code D3D11_BUFFER_DESC bd; ZeroMemory( &bd, sizeof(bd) ); bd.Usage = D3D11_USAGE_DYNAMIC; bd.ByteWidth = vertCount * vertexTypeWidth; bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; //bd.CPUAccessFlags = 0; bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; D3D11_SUBRESOURCE_DATA InitData; ZeroMemory( &InitData, sizeof(InitData) ); InitData.pSysMem = vertices; mVertexType = vertexType; HRESULT hResult = device->CreateBuffer( &bd, &InitData, &m_pVertexBuffer ); // This will be S_OK if(hResult != S_OK) return false; Remap code D3D11_MAPPED_SUBRESOURCE resource; HRESULT hResult = deviceContext->Map(m_pVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource); // This will be S_OK if(hResult != S_OK) return false; resource.pData = vertices; deviceContext->Unmap(m_pVertexBuffer, 0);
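    The remap code contains the likely culprit (a hedged diagnosis): Map() returns a pointer to the buffer's memory in resource.pData, and the vertex data has to be copied into that memory. The line resource.pData = vertices only overwrites the local struct field, so the GPU keeps whatever the discarded buffer happens to contain, which matches the flicker. A minimal corrected sketch:

      D3D11_MAPPED_SUBRESOURCE resource;
      HRESULT hResult = deviceContext->Map(m_pVertexBuffer, 0,
                                           D3D11_MAP_WRITE_DISCARD, 0, &resource);
      if (FAILED(hResult))
          return false;

      // copy the new vertex data INTO the mapped region
      memcpy(resource.pData, vertices, vertCount * vertexTypeWidth);

      deviceContext->Unmap(m_pVertexBuffer, 0);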

    Read the article

  • SQL SERVER – Get All the Information of Database using sys.databases

    - by pinaldave
    Earlier I wrote the blog article SQL SERVER – Finding Last Backup Time for All Database. In response to that article, I received a very interesting script from SQL Server expert Matteo as a comment on the blog. He has written a script using sys.databases, which provides plenty of information about your databases. I suggest you run this on your server and learn the unknowns of your databases as well. SELECT database_id, CONVERT(VARCHAR(25), DB.name) AS dbName, CONVERT(VARCHAR(10), DATABASEPROPERTYEX(name, 'status')) AS [Status], state_desc, (SELECT COUNT(1) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS DataFiles, (SELECT SUM((size*8)/1024) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS [Data MB], (SELECT COUNT(1) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS LogFiles, (SELECT SUM((size*8)/1024) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS [Log MB], user_access_desc AS [User access], recovery_model_desc AS [Recovery model], CASE compatibility_level WHEN 60 THEN '60 (SQL Server 6.0)' WHEN 65 THEN '65 (SQL Server 6.5)' WHEN 70 THEN '70 (SQL Server 7.0)' WHEN 80 THEN '80 (SQL Server 2000)' WHEN 90 THEN '90 (SQL Server 2005)' WHEN 100 THEN '100 (SQL Server 2008)' END AS [compatibility level], CONVERT(VARCHAR(20), create_date, 103) + ' ' + CONVERT(VARCHAR(20), create_date, 108) AS [Creation date], -- last backup ISNULL((SELECT TOP 1 CASE TYPE WHEN 'D' THEN 'Full' WHEN 'I' THEN 'Differential' WHEN 'L' THEN 'Transaction log' END + ' – ' + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(),Backup_finish_date))) + ' days ago', 'NEVER')) + ' – ' + CONVERT(VARCHAR(20), backup_start_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_start_date, 108) + ' – ' + CONVERT(VARCHAR(20), backup_finish_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_finish_date, 108) + ' (' + CAST(DATEDIFF(second, BK.backup_start_date, BK.backup_finish_date) AS VARCHAR(4)) + ' ' + 'seconds)' FROM msdb..backupset BK WHERE BK.database_name = DB.name ORDER BY backup_set_id DESC),'-') AS [Last backup], CASE WHEN is_fulltext_enabled = 1 THEN 'Fulltext enabled' ELSE '' END AS [fulltext], CASE WHEN is_auto_close_on = 1 THEN 'autoclose' ELSE '' END AS [autoclose], page_verify_option_desc AS [page verify option], CASE WHEN is_read_only = 1 THEN 'read only' ELSE '' END AS [read only], CASE WHEN is_auto_shrink_on = 1 THEN 'autoshrink' ELSE '' END AS [autoshrink], CASE WHEN is_auto_create_stats_on = 1 THEN 'auto create statistics' ELSE '' END AS [auto create statistics], CASE WHEN is_auto_update_stats_on = 1 THEN 'auto update statistics' ELSE '' END AS [auto update statistics], CASE WHEN is_in_standby = 1 THEN 'standby' ELSE '' END AS [standby], CASE WHEN is_cleanly_shutdown = 1 THEN 'cleanly shutdown' ELSE '' END AS [cleanly shutdown] FROM sys.databases DB ORDER BY dbName, [Last backup] DESC, NAME Please let me know if you find this information useful. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Advantages of Server-Side Scripting Languages over Client-Side Scripting Languages

    There are numerous advantages to server-side scripting languages over client-side scripting languages when it comes to creating web sites that are more compelling than a standard static site. Server-side scripts are executed on a web server while composing the data to return to a client. These scripts allow developers to modify the content being sent to the user before the data is returned, as well as store information about the user. In addition, server-side scripts run in a controllable environment. The same cannot be said for client-side languages, because the developer cannot control the user's environment the way they can a web server: some users may turn off client scripts, some may only be allowed limited access on their systems, and others may have full control of the environment. I have been developing web applications for over nine years, and I have used server-side languages for most of the applications I have built. Here is a list of common things I have developed with server-side scripts - common generic functionality: send email, FTP files, security/access control, encryption, URL rewriting, data access, data creation, and I/O access. The one important feature server-side languages will help me with on my website is data access, because my component will be backed by a SQL Server database. I believe form validation is one instance where server-side and client-side scripts may be used interchangeably, because it does not matter how or where the data is validated as long as the data that gets inserted is valid.

    Read the article

  • How do I align my partition table properly?

    - by Jorge Castro
    I am in the process of building my first RAID5 array. I've used mdadm to create the following set up: root@bondigas:~# mdadm --detail /dev/md1 /dev/md1: Version : 00.90 Creation Time : Wed Oct 20 20:00:41 2010 Raid Level : raid5 Array Size : 5860543488 (5589.05 GiB 6001.20 GB) Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 20 20:13:48 2010 State : clean, degraded, recovering Active Devices : 3 Working Devices : 4 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 64K Rebuild Status : 1% complete UUID : f6dc829e:aa29b476:edd1ef19:85032322 (local to host bondigas) Events : 0.12 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 4 8 64 3 spare rebuilding /dev/sde While that's going I decided to format the beast with the following command: root@bondigas:~# mkfs.ext4 /dev/md1p1 mke2fs 1.41.11 (14-Mar-2010) /dev/md1p1 alignment is offset by 63488 bytes. This may result in very poor performance, (re)-partitioning suggested. Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=16 blocks, Stripe width=48 blocks 97853440 inodes, 391394047 blocks 19569702 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 11945 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 Writing inode tables: ^C 27/11945 root@bondigas:~# ^C I am unsure what to do about "/dev/md1p1 alignment is offset by 63488 bytes." and how to properly partition the disks to match so I can format it properly.
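    A hedged way out (two common suggestions for this exact mkfs warning; the stride and stripe-width values come straight from the mkfs output above): either skip the partition table entirely and format the md device itself, or recreate the partition so it starts on a 1 MiB boundary instead of the legacy 63-sector offset.

      # option 1: no partition table; format the array directly
      mkfs.ext4 -E stride=16,stripe-width=48 /dev/md1

      # option 2: keep a partition, but align its start to 1 MiB
      parted /dev/md1 mklabel gpt
      parted /dev/md1 mkpart primary 1MiB 100%
      mkfs.ext4 /dev/md1p1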

    Read the article

  • CodePlex Daily Summary for Friday, June 04, 2010

    CodePlex Daily Summary for Friday, June 04, 2010

    New Projects:
    23 Umbraco addons: 23 Umbraco addons
    Add-ons for EPiServer Relate+: In the Add-ons for EPiServer Relate+ you will find add-ons, extensions and modules that work together with EPiServer Relate+.
    Advanced Mail Merge (AMM) for Microsoft Office: Advanced Mail Merge for Microsoft Word 2007/2010, offers great extensable functionality: - Merge to document (PDF) - Merge to attachment - Use Out...
    Cenobith RLS Sample: Simple implementation of Row Level Security for Microsoft SQL Server
    CodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these dat...
    DigitArchive: Digit Archive makes it easy for the DIGIT magazine readers to find the correct software or movie bundled in the media along with the magazine. You'...
    dNet.DB: dNetDB is a .net framework that simplifies model and data access by providing a database independent object-based persistence, where objects are pe...
    Dynamic Application Framework: The Dynamic Application Framework provides a highly flexible environment for creating applications. Multiple UI and Execution Environments, along w...
    ECoG: ECoG toolkit
    FB Toolkit with Contracts: This is a research project where I have inserted code contracts into the Facebook Toolkit source code, version 3.1 beta. This delivers an efficien...
    GeneCMS: GeneCMS allows users to generate static HTML based websites by offering an ASP.NET editing front-end that can be run in the local machine. It is ta...
    HooIzDat: HooIzDat is game that asks, who the heck is that?! It's a two player game where your task is to guess your opponent's person before he or she guess...
    JingQiao.Interacting: JingQiao Interacting Messaging
    KanbanBoard: Visual task board for Kanban and Scrum.
    Learning CSharp: Just Learning CSharp
    Mammoth: mammoth
    MapWindow Mobile: MapWindow Mobile is mobile GIS Software which can run on windows mobile, developed in C# .NET Compact Framework. It provides basic GIS functionalit...
    Mindless Setback: Setback is a card game popular in New England. This project uses a combination of brute force and Monte Carlo methods to play Setback. This is an e...
    MSNCore(DirectUI) Element Viewer: MSNCore Element Viewer is an application designed to enumerate the elements with in applications built with MSNCore.dll and UXCore.dll. This appli...
    MSVN Team: class assignments for teacher Lường
    Nugget: Web Socket Server: A web socket server implemented in c#. The goal of the projects is to create an easy way to start using HTML5 web sockets in .NET web applications.
    oSoft ColorPicker Control for Visual Studio 2010: oSoft ColorPicker is an user control that can be used instead of the ColorDialog when you want to allow your users to select a color in a windows f...
    Prism Software Factory: The Prism Software Factory is a software factory for Visual Studio 2010 assisting developers in the process of building WPF & Silverlight applicati...
    Project Lion: Project lion is forum developed in Silverlight technology.
    Refix - .NET dependency management: Refix is an attempt to solve the problem of binary dependency management in large .NET solutions. It will achieve the goal using (amongst other thi...
    Rich Task List: Rich Task List is a tutorial project for DotNetNuke Module Development.
    SharePoint PowerRSS: Easy/Clean way to get SharePoint list data via more standard RSS feed. I found CleanRSS.aspx as part of SPRSS: Enhanced RSS Functionality for WSS ...
    SOAPI - StackOverflow API Generator: Generates, directly from the self documenting StackOverflow API specification, an end-to-end, fully documented API wrapper library with Visual Stu...
    SQL Script Application Utility: This C# project allows you to apply scripts to a database for table creation, data creation, etc. You can keep DDL in separate SQL scripts which c...
    Sql Server Reports Viewer: Sql Server Reports Viewer makes it easier to render Sql Server Reports without the need to setup a SSRS Server. This makes deployments a breeze. ...
    StorageHD: StorageHD system for large video files
    UrzaGatherer: UrzaGatherer is a WPF 4.0 client application to handle Magic The Gathering cards collections. You can manage expansions, blocks and all informatio...
    webrel: This tool executes simple relational algebra expressions. It is useful for learning of Database course. Javascript and xhtml is used to develop thi...
    World Wide Grab: World Wide Grab allows retrieval and integration of various semi-structured data sorces, expecially Web applications. It turns every available res...

    New Releases:
    3FD - Framework For Fast Development (C++): Alpha 3: This release was compiled in Visual Studio Release mode. It means you can use it in whatever compiler you want. However, the compatibility with ano...
    Advanced Mail Merge (AMM) for Microsoft Office: Advanced MailMerge 2007.zip: Release 1.1.0.0
    Army Bodger: Bodger 3 Archetype Test: Ok so it's later and I've largely finished it. Right now the Space Wolves have their Troops written and one HQ unit. The equipment panel largely wo...
    AwesomiumDotNet: AwesomiumDotNet 1.6 beta: Preview of AwesomiumDotNet 1.6.
    Bojinx: Bojinx Core V4.6: New features in this release: Greatly improved logging for INFO and DEBUG. Improved the getClassName function in ObjectUtils. Added the ability ...
    Cenobith RLS Sample: Sample App: Change connection strings in App.config and Web.config files.
    Christoc's DotNetNuke C# Module Development Template: 00.00.02: A minor update from the original release with a few fixes including Localization and some updated documentation.
    Community Forums NNTP bridge: Community Forums NNTP Bridge V25: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...
    DEWD: DEWD for Umbraco v1.0: Beta release of the package. Functional feature set and fairly stable. Since the alpha: Validation on input fields Custom view controls Ability...
    DotNetNuke Developers Help File: DNNHelpSystem 05.04.02: Release of the developer core API help documentation of DotNetNuke in MSDN style format, both as .CHM stand alone file as well as a html website ba...
    Drive Backup: Drive Backup V.0604: This release includes the following fixes/features: * Fixed incompatibility with some USB drives (those marked as “fixed” by Windows) * Ad...
    Event Scavenger: Version 3.3 (Refresh): Archiving bit added to database plus archiving stored procedure updated. Rest of items just refreshed. Database set to version 3.3
    Expression Encoder Batch Processor: Expression Batch v0.3: Now set the newly-converted file's Created DateTime to equal the source file's. This helps keep your videos sorted chronologically in media librar...
    Folder Bookmarks: Folder Bookmarks 1.6.1: The latest version of Folder Bookmarks (1.6.1), with Mini-Menu bug fixes and 'Help' feature - all the instructions needed to use the software (If y...
    Genesis Smart Client Framework: Genesis v2.0 - Ruby User Experience Platform (UXP): This is the start of the rewrite of the entire framework. The rewrite will include support for XAML through WPF and Silverlight, WCF, Workflow Serv...
    Global: http requester tool: Added a brand new console app for making http requests.
    GMap.NET - Great Maps for Windows Forms & Presentation: Hot Build: this is latest change-set build, unstable preview
    HERB.IQ: Alpha 0.1 Source code release 4: As of 6-23-10 @ 9:48EST
    Infragistics Analytics Framework: Infragistics Analytics Framework 1.0: This project includes wrappers for the Infragistics controls that integrate with the recently launched Microsoft Silverlight Analytics Framework. T...
    Innovative Games: Cube Mapper: Cube Mapper is a small tool that takes in six textures and outputs a cube map that is a combination of the six textures. Cube Mapper supports .tga...
    jQuery Library for SharePoint Web Services: SPServices 0.5.6: This release is in an alpha state. Please only download it if you know what you are getting and are willing to test it. In any case, it's a bad ide...
    linq to jquery: jlinq v1.00 no doc: First public version of jlinq! no doc yet, soon too come!
    LinqSpecs: Version 1.0.1: Fabio Maulo has sent several patchs in order to make LinqSpecs to work with any linq provider other than in memory. Big KUDOS for him.
    mojoPortal: 2.3.4.4: see release notes on mojoportal.com Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on ...
    Nugget: Web Socket Server: Initial POC release: The initial proof of concept release. To try it out, open the Sample App.sln, set the ChatServer project as the start-up project, start debugging ...
    oSoft ColorPicker Control for Visual Studio 2010: oSoft ColorPicker Control for VS 2010 Beta 1: Beta 1
    Refix - .NET dependency management: Refix v0.1.0.48 ALPHA: First preview version of Refix command-line tool.
    SharePoint 2010 CSV Bulk Term Set Importer: Bulk Term Set Importer: Initial Release
    SOAPI - StackOverflow API Generator: SOAPI Wrappers: SOAPI-JS First release as SOAPI-JS, SOAPI-CS coming shortly. Tests and example included
    SQL Compact Toolbox: Beta 0.8.1: Initial test release - mind the bumps. Requires Visual Studio 2010.
    Thumb nail creator and image resizer: ThumbnailCreator1.2: this release fixes an issue that was occurring when the control was used inside paged data
    TS3QueryLib.Net: TS3QueryLib.Net Version 0.23.17.0: Changelog Added Properties "IsSpacer" and "SpacerInfo" to ChannelListEntry. "IsSpacer" allows you to check whether the channel is a spacer channel ...
    UI Accessibility Checker: UI Accessibility Checker v.2.0: We are excited to announce the release of AccChecker 2.0! In addition to numerous bug fixes and usability improvements, these major features have...
    webrel: webrel 1.0: webrel 1.0
    WindStyle SlugHelper for Windows Live Writer: 1.2.0.0: Added: an option to skip posts that already contain a slug (configurable in the plugin options); Added: a plugin icon; Updated: supports the latest Windows Live Writer, build 14.0.8117.416.
    Work Recorder - Hold on own time!: WorkRecorder 1.1: +Only one instance can run #Change histogram to pie chart

    Most Popular Projects: WBFS Manager, Rawr, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), PHPExcel, patterns & practices – Enterprise Library, Microsoft SQL Server Community & Samples, ASP.NET

    Most Active Projects: Community Forums NNTP bridge, Rawr, Ionics Isapi Rewrite Filter, patterns & practices – Enterprise Library, GMap.NET - Great Maps for Windows Forms & Presentation, N2 CMS, BlogEngine.NET, Farseer Physics Engine, Main project, Mirror Testing System

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
    Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11. Why use Live Upgrade? Live Upgrade (LU) can significantly reduce downtime associated with patching Live Upgrade avoids dropping to single-user mode for long periods of time during patching Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode; thereby allowing normal system operations to continue with the active BE, while the alternate BE is being patched Activating an newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback. Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents? All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all works together to minimize downtime Ops Centers advanced inventory and reporting features assurance that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts Ops Center gives admins control over the boot environment specifications or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error Preparing to use Live Upgrade-like features in Solaris 11 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementationImportant distinctions: Solaris 11 assumes ZFS root Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm) Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE). Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed Solaris 11 allows pkgs to be installed + upgraded in alternate BE (e.g. instead of the live system) but it is controlled on a per-pkg basis Boot Environments are activated across a reboot; instead of spending long periods installing + upgrading packages in single user mode. Fallback to a prior BE is a function of the BE infrastructure (a la beadm). (Generally) Reboot + BE activation can be much much faster on Solaris 11 Preparing to use Live Upgrade on Solaris 10 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Live Upgrade Pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...) Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade Live Upgrade with ZFS root is far more straight-forward than any scheme based on Alternative Boot Environments in slices or temporarily breaking mirrors Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center) UFS root can (technically) be used, but it is significantly more involved (e.g. 
discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (out side of Ops Center) Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root Recommendation: Start with Ops Center 12c or newer Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 update 9 (and older) -- unrelated to Live Upgrade, but more on the issue, below. NOTE: There is no magic!  If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area. Live Upgrade Pre-requisite Patches (Solaris 10) Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10.Use the following MOS Search String to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements(Click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary) It is a very good idea to check the document periodically and adapt to its contents, accordingly.IMPORTANT:  In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time.HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary. Preparing to use Ops Center Discover + Manage (Install + Configure the Ops Center agent in) each Global Zone Recommendation:  Begin by using OCDoctor --agent-prereq to determine whether OS meets OC prerequisites (resolve any issues) See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats) WARNING: Systems running unpatched Solaris 10 update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) OR SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) Ops Center 12c (initial release) and 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent): # mkdir -p /etc/opt/sun/oc # echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf # chmod 755 /etc/opt/sun /etc/opt/sun/oc # chmod 644 /etc/opt/sun/oc/zstat.conf # chown -Rh root:sys /etc/opt/sun/oc NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10)Overview: Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned. 
SIMULATE the deployment of the LU-related pre-patches Observe whether any of the LU-related pre-patches will require a reboot The job details for each Global Zone will advise whether a reboot step will be required ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot, maybe okay to do now; vs. must do later because of the reboot). You can schedule the job to occur later, during a maintenance window Check the job status for each node, resolving any issues found Once the LU-related pre-patches are applied, you can Ops Center to patch using Live Upgrade on Solaris 10 Using Ops Center to patch Solaris 10 with LU/ABE's -- the GOODS!(this is the heart of the tip): Create an OS Update Profile containing the patches that make up your standard build Use Solaris Baselines when possible Add other individual patches as needed ACTUALLY deploy the OS Update Profile Specify the appropriate Live Upgrade options, e.g. Synchronize the active BE to the alternate BE before patching Do not activate the BE after patching Check the job status for each node, resolving any issues found Activate the newly patched BE according to your change control process Activate = Reboot to the ABE, making the ABE the new active BE Ops Center does not separate LU activate from reboot, so expect a reboot! Check the job status for each node, resolving any issues found Examples (w/Screenshots) Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only) ABE to be created on ZFS with name S10_12_07REC (Example) Uses built in feature to call “lucreate -n S10_12_07REC” behind scenes if not already present NOTE: Leave “lucreate” params blank (if you do specify options, the will be appended after -n $ABEName) Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script) The Alternate Boot Environment is to be created via custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used. Operational Profile, which provides the script to create an ABE: Very similar to the automatic case, but with a Script (Operational Profile), which is used to create the ABE Relies on user-supplied script in the form of an Operational Profile Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) – it is up to the script author to do the right thing! EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (e.g. script) approach to call: # lucreate -n S10_1207REC NOTE: OC special variable is $ABEName Boot Environment Profile, which references the Operational Profile Script = Operational Profile on this screen Refers to Operational Profile shown in the previous section The user-supplied S10_Create_BE Operational Profile will be run The Operational Profile must send a non-zero exit code if there is a problem (so that the OS Update job will not proceed) Solaris 10 OS Update Profile (to provide the actual patch specifications) Solaris 10 Baseline “Recommended” chosen for “Install” Solaris 10 OS Update Plan (two-steps in this case) “Create a Boot Environment” + “Update OS” are chosen. 
Using Ops Center to patch Solaris 11 with Boot Environments (as needed) Create a Solaris 11 OS Update Profile containing the packages that make up your standard build ACTUALLY deploy the Solaris 11 OS Update Profile BE will be created if needed (or you can stipulate no BE) BE name will be auto-generated (if needed), or you may specify a BE name Check the job status for each node, resolving any issues found Check if a BE was created; if so, activate the new BE Activate = Reboot to the BE, making the new BE the active BE Ops Center does not separate BE activate from reboot NOTE: Not every Solaris 11 OS Update will require a new BE, so a reboot may not be necessary. Solaris 11: Auto BE Create (as Needed -- let Ops Center decide) BE to be created as needed BE to be named automatically Reboot (if necessary) deferred to separate step Solaris 11: OS Profile Solaris 11 “entire” chosen for a particular SRU Solaris 11: OS Update Plan (w/BE)  “Create a Boot Environment” + “Update OS” are chosen. Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers.  For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward.  For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straight forward way to patch, and far fewer prerequisites to satisfy in getting there.  Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall up-time across all the Solaris systems in your environment. Please let us know what you think. Until next time... Leon -- Leon Shaner | Senior IT/Product Architect, Systems Management | Ops Center Engineering @ Oracle. The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • SQL SERVER – 2008 – Missing Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it. Here is the script from my script bank which I use to identify missing indexes on any database. Please note that you should not create all the missing indexes this script suggests; this is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Anyway, the script is a good starting point. You should pay attention to Avg_Estimated_Impact when you are going to create an index. The index creation script is also provided in the last column. Download Missing Index Script with Unused Index Script -- Missing Index Script -- Original Author: Pinal Dave (C) 2011 SELECT TOP 25 dm_mid.database_id AS DatabaseID, dm_migs.avg_user_impact*(dm_migs.user_seeks+dm_migs.user_scans) Avg_Estimated_Impact, dm_migs.last_user_seek AS Last_User_Seek, OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) AS [TableName], 'CREATE INDEX [IX_' + OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) + '_' + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.equality_columns,''),', ','_'),'[',''),']','') + CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN '_' ELSE '' END + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.inequality_columns,''),', ','_'),'[',''),']','') + ']' + ' ON ' + dm_mid.statement + ' (' + ISNULL (dm_mid.equality_columns,'') + CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END + ISNULL (dm_mid.inequality_columns, '') + ')' + ISNULL (' INCLUDE (' + dm_mid.included_columns + ')', '') AS Create_Statement FROM sys.dm_db_missing_index_groups dm_mig INNER JOIN sys.dm_db_missing_index_group_stats dm_migs ON dm_migs.group_handle = dm_mig.index_group_handle INNER JOIN sys.dm_db_missing_index_details dm_mid ON dm_mig.index_handle = dm_mid.index_handle WHERE dm_mid.database_ID = DB_ID() ORDER BY Avg_Estimated_Impact DESC GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ASP.NET MVC localization DisplayNameAttribute alternatives: a good way

    - by Brian Schroer
    The ASP.NET MVC HTML helper methods like .LabelFor and .EditorFor use model metadata to autogenerate labels for model properties. By default it uses the property name for the label text, but if that’s not appropriate, you can use a DisplayName attribute to specify the desired label text: [DisplayName("Remember me?")] public bool RememberMe { get; set; } I’m working on a multi-language web site, so the labels need to be localized. I tried pointing the DisplayName attribute to a resource string: [DisplayName(MyResource.RememberMe)] public bool RememberMe { get; set; } …but that results in the compiler error "An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type”. I got around this by creating a custom LocalizedDisplayNameAttribute class that inherits from DisplayNameAttribute: 1: public class LocalizedDisplayNameAttribute : DisplayNameAttribute 2: { 3: public LocalizedDisplayNameAttribute(string resourceKey) 4: { 5: ResourceKey = resourceKey; 6: } 7:   8: public override string DisplayName 9: { 10: get 11: { 12: string displayName = MyResource.ResourceManager.GetString(ResourceKey); 13:   14: return string.IsNullOrEmpty(displayName) 15: ? string.Format("[[{0}]]", ResourceKey) 16: : displayName; 17: } 18: } 19:   20: private string ResourceKey { get; set; } 21: } Instead of a display string, it takes a constructor argument of a resource key. The DisplayName method is overridden to get the display string from the resource file (line 12). If the key is not found, I return a formatted string containing the key (e.g. “[[RememberMe]]”) so I can tell by looking at my web pages which resource keys I haven’t defined yet (line 15). The usage of my custom attribute in the model looks like this: [LocalizedDisplayName("RememberMe")] public bool RememberMe { get; set; } That was my first attempt at localized display names, and it’s a technique that I still use in some cases, but in my next post I’ll talk about the method that I now prefer, a custom DataAnnotationsModelMetadataProvider class…

    Read the article

  • Fusion Middleware 11g (11.1.1.5.0)

    - by Hiro
    In July 2011 (released 2011/07/12), the Fusion Middleware 11gR1 product set was refreshed to 11.1.1.5.0. Here is a summary of what is, and is not, included in the 11.1.1.5.0 release.
    1. Products updated to 11.1.1.5.0, with new installers available for download: Oracle WebLogic Server 11g (10.3.5); Oracle SOA Suite 11g (11.1.1.5.0), including BAM, BPEL and BPM; Oracle WebCenter Suite 11g (11.1.1.5.0); Oracle JDeveloper 11g (11.1.1.5.0); Oracle Identity Management 11g (11.1.1.5.0), including OID, OIF and OVD; Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.5.0); Oracle Enterprise Repository 11g (11.1.1.5.0); Oracle Enterprise Content Management 11g (11.1.1.5.0); Oracle Service Bus 11g (11.1.1.5.0); Oracle Fusion Middleware Web Tier Utilities 11g (11.1.1.5.0); Oracle Data Integrator 11g (11.1.1.5.0).
    2. Products not refreshed in this cycle, whose existing versions remain current: Oracle Complex Event Processing 11g (11.1.1.4.0); Oracle Portal, Forms, Reports and Discoverer 11g (11.1.1.4.0); Oracle Business Process Analysis Suite 11g (11.1.1.3.0); Oracle Service Registry 11g (11.1.1.2.0); Oracle Identity and Access Management 11g (11.1.1.3.0), including OAM, OAAM and OIM; Oracle Tuxedo 11g (11.1.1.2.0).
    3. Sparse releases, delivered as patches to be applied on top of an existing 11.1.1.4 installation: Oracle Identity Management 11g (11.1.1.5.0); Oracle Fusion Middleware Web Tier Utilities 11g (11.1.1.5.0). For details, see the "Oracle Fusion Middleware Download, Installation, and Configuration ReadMe 11g Release 1 (11.1.1.5.0)".
    4. For the list of fixed bugs in each product, see Note #1316076.1 (available via My Oracle Support).

    Read the article

  • RAID md device is not removed from memory: how do I overcome this problem?

    - by santhosha
    I created a RAID 10 array and removed two member devices from md11, one by one. After that I tried editing the mounted contents (the mount went into a non-responding state). When I then tried to remove the remaining devices, mdadm reported "device or resource busy" (the array is not removed from memory). I tried terminating the processes holding it, but that did not work either. For 4 days I have observed the resync stuck at 8.0%, and it does not change. cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj1[4] 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec mdadm -D /dev/md11 /dev/md11: Version : 00.90.03 Creation Time : Sun Jan 16 16:20:01 2011 Raid Level : raid10 Array Size : 286743936 (273.46 GiB 293.63 GB) Device Size : 143371968 (136.73 GiB 146.81 GB) Raid Devices : 4 Total Devices : 2 Preferred Minor : 11 Persistence : Superblock is persistent Update Time : Sun Jan 16 16:56:07 2011 State : active, degraded, resyncing Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K Rebuild Status : 8% complete UUID : 5e124ea4:79a01181:dc4110d3:a48576ea Events : 0.23 Number Major Minor RaidDevice State 0 0 0 0 removed 1 0 0 1 removed 4 8 145 2 faulty spare rebuilding /dev/sdj1 3 8 65 3 active sync /dev/sde1 umount /dev/md11 umount: /dev/md11: not mounted mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 kill -9 2128 kill -9 5018 kill -9 27605 kill -9 30562 kill -3 30591 mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj1[4] 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec

    Read the article

  • gl_PointCoord always zero

    - by Jonathan
    I am trying to draw point sprites in OpenGL with a shader, but gl_PointCoord is always zero. Here is my code. Setup: //Shader creation.. (includes glBindAttribLocation(program, ATTRIB_P, "p");) glEnableVertexAttribArray(ATTRIB_P); In the rendering loop: glUseProgram(shader_particles); float vertices[]={0.0f,0.0f,0.0f}; glEnable(GL_TEXTURE_2D); glEnable(GL_POINT_SPRITE); glEnable(GL_VERTEX_PROGRAM_POINT_SIZE); //glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); (tried with this on/off, doesn't work) glVertexAttribPointer(ATTRIB_P, 3, GL_FLOAT, GL_FALSE, 0, vertices); glDrawArrays(GL_POINTS, 0, 1); Vertex Shader: attribute highp vec4 p; void main() { gl_PointSize = 40.0f; gl_Position = p; } Fragment Shader: void main() { gl_FragColor = vec4(gl_PointCoord.st, 0, 1); // if the coords range from 0-1, this should draw a square with black, red, green, yellow corners } But this only draws a black square with a size of 40. What am I doing wrong? Edit: Point sprites work when I use the fixed-function pipeline, but I need to use shaders because in the end the code will be for OpenGL ES 2.0. glUseProgram(0); glEnable(GL_TEXTURE_2D); glEnable(GL_POINT_SPRITE); glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); glPointSize(40); glBegin(GL_POINTS); glVertex3f(0.0f,0.0f,0.0f); glEnd(); Is anyone able to get point sprites working with shaders? If so, please share some code.
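    One hedged thing to check (GL_COORD_REPLACE is per-texture-unit state on desktop OpenGL, an easy detail to miss; on a real OpenGL ES 2.0 context neither enable exists and gl_PointCoord is supposed to work unconditionally): set the flag on the texture unit that is active when the point is drawn, with the shader program bound.

      glUseProgram(shader_particles);
      glEnable(GL_POINT_SPRITE);
      glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
      // GL_COORD_REPLACE lives in texture-unit state: select the unit first
      glActiveTexture(GL_TEXTURE0);
      glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
      glVertexAttribPointer(ATTRIB_P, 3, GL_FLOAT, GL_FALSE, 0, vertices);
      glDrawArrays(GL_POINTS, 0, 1);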

    Read the article

  • Looking ahead at 2011 with Paul Greenberg

    - by divya.malik
    It is almost the end of 2010; rather unbelievable how fast this year has gone by. It is always interesting to read what our CRM gurus have to say about the coming year. So here is CRM luminary Paul Greenberg's forecast for 2011:
    1. Mobile CRM growth accelerates.
    2. CRM and "social" companies continue to integrate their capabilities as a few suites begin to emerge.
    3. Social "rankings", as a measure of customer engagement, will become a standard public measure.
    4. Analytics exhibits the most significant growth of any area, with customer insight apps leading the way.
    5. Marketing apps mature, with social marketing becoming an integral part of the application offering.
    6. Customer service begins to redefine itself, with greater emphasis on service communities, web self-service and customer knowledge capture.
    7. Knowledge management replaces enterprise content management as a core requirement for large businesses.
    8. Customer experience reasserts itself loudly as the core of CRM and SCRM. This one is kind of a no-brainer in a way.
    9. Co-creation and customer-driven product innovation become more than just an advanced idea.
    10. Microsoft Azure emerges as a true cloud provider at the level of Amazon, as cloud computing continues its rise to becoming a primary technology infrastructure.
    11. Application marketplaces will become commonplace as companies look to platform providers to fill ecosystem needs, not just CRM.
    I do encourage you to read the details of his forecasts, which are split into two blog posts. For Part I click here and for Part II, click here. Technorati Tags: oracle, siebel CRM, scrm, paul greenberg

    Read the article

  • Installation procedure RAC One Node

    - by rene.kundersma
    Okay, in order to test RAC One Node on my Oracle VM laptop, I just: - installed Oracle VM 2.2 - created two OEL 5.3 images The two images are fully prepared for Oracle 11gR2 Grid Infrastructure and 11gR2 RAC, including four shared disks for ASM and private NICs. After installation of the Oracle 11gR2 Grid Infrastructure and a "software only" installation of 11gR2 RAC, I installed patch 9004119, as shown in the opatch lsinv output. This patch has the scripts required to administer RAC One Node; you will see them later. At the moment we have them available for Linux and Solaris. After installation of the patch, I created a RAC database with an instance on one node. Please note that the "Global Database Name" has to be the same as the SID prefix and should be less than or equal to 8 characters. When the database creation is done, I first create a service. This is because RAC One Node needs to be "initialized" each time you add a service. After creating the service, a script called raconeinit needs to run from $RDBMS_HOME/bin. This is a script supplied by the patch; I can imagine the next major patch set of 11gR2 will have these scripts available by default. The script will configure the database to run on other nodes. If you ran raconeinit again after initialization, you would see that the configuration is already done. So, now the configuration is ready and we are ready to run 'Omotion' and move the service around from one node to the other (yes, VM competitors: the service remains available during the migration, nice right?). Omotion is started by running Omotion; with Omotion -v you get verbose output. During the migration you will see the two instances active, and after the migration there is only one instance left, on the new node.
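    In shell terms, the flow described above boils down to something like this (a hedged sketch; raconeinit and Omotion come from patch 9004119 and walk you through interactive prompts):

      # one-time: initialize the database for RAC One Node
      $RDBMS_HOME/bin/raconeinit

      # migrate the running instance to the other node, with verbose output
      $RDBMS_HOME/bin/Omotion -v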

    Read the article

  • Schmelp Portal, Help Portal: Oracle Fusion Applications Help Online

    - by ultan o'broin
    Yes, the Oracle Fusion Applications Help (or "Help Portal" to us insiders) is now available. Click the link fusionhelp.oracle.com and check it out! Oracle Fusion Applications Help user interface If you're developing your own help for Fusion Apps, then you can use the newly published Oracle Fusion Help User Interface Guidelines to understand the best usage. These guidelines are also a handy way to get to the embedded help design patterns for Oracle Fusion Applications, now also available. To customize and extend the help content itself no longer requires the engagement of your IT Department or expensive project work. Customers can now use the Manage Custom Help capability to edit or add whatever content they need, make it secure and searchable, and develop a community around it too. You can see more of that capability in this slideshare.net presentation from UKOUG Ireland 2012 about the Oracle Fusion Applications User Assistance and Support Ecosystem by Ultan O'Broin and Richard Bingham. Manage Custom Help capability To understand the science and craft that went into the creation and delivery of the "Help Portal" (cardiac arrests all round in Legal and Marketing Depts), then check out this great white paper by Ultan O'Broin and Laurie Pattison: Putting the User into Oracle Fusion Applications User Assistance. So, what's with this "Help Portal" name? Well, that's an internal (that is, internal to Oracle) name only and we should all really call it by the correct product listing name: Oracle Fusion Applications Help. To be honest, I don't care what you call it as long as it is useful. However, these internal names can be problematic when talking with support or the licensing people. For years, we referred casually to the Oracle Applications Help or Oracle Applications Help System that ships with the Oracle E-Business Suite products as "iHelp". Then, somebody went and bought Siebel. Game over.

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data...), an English setter could be seen as the link to the right data. Data, data, data: we live in a world where the data technology behind popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on plays a predominant role in our lives. More and more technologies are used to analyze and track our behavior, detect patterns, and propose "the best/right user experience", from the Google Ad services to telco companies and large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and dedicated hardware/software servers (such as the Exadata servers) to process, analyze, and understand the trends, and to offer new services to users. Some of these data feeds are raw, unstructured data and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the collocation of compute nodes with the data, which in turn led to the MapReduce parallel pattern and to the development of the Hadoop framework, based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use this distribute/aggregate pattern for data processing (Coherence, NoSQL Database, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or an equivalent Hadoop cluster. In this paper, a lab-like implementation of this concept is done on a single Linux x64 server, running Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple:
    - Install Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 on a Linux x64 server (or a VirtualBox appliance).
    - Get the Apache Hadoop distribution from http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    - Get the Oracle Big Data Connectors from http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    - Check the Java version of your Linux server with the command:

      java -version
      java version "1.7.0_40"
      Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
      Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

    - Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 and modify your .bash_profile:

      export HADOOP_HOME=/u01/hadoop-1.2.1
      export PATH=$PATH:$HADOOP_HOME/bin
      export HIVE_HOME=/u01/hive-0.11.0
      export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

    - Set up ssh trust for the Hadoop processes. This is a mandatory step; in our case we establish a "local trust", as we are using a single-node configuration: copy the new public key to the list of authorized keys, then connect to test the ssh setup against your localhost (a minimal sketch follows below).
    We will run a pseudo-distributed Hadoop "cluster": all the Hadoop Java daemons run on a single host, which is enough for our demo purposes.
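    Picking up the ssh trust step, here is a minimal sketch, assuming the same OS user (oracle) runs all the Hadoop daemons and that OpenSSH defaults are in use:

```bash
# Generate a key pair with an empty passphrase (default location):
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

# Authorize our own public key, so the Hadoop start scripts can
# ssh back into this box without a password:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Test: this must log in and run without prompting for a password,
# otherwise the Hadoop start scripts will fail.
ssh localhost date
```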
We need to fine-tune some Hadoop configuration files under $HADOOP_HOME/conf: core-site.xml, hdfs-site.xml and mapred-site.xml (a consolidated sketch of these files and the HDFS bring-up commands follows at the end of this section, just before the Oracle-side setup). Check that the hadoop binaries are referenced correctly from the command line by executing: hadoop version. As Hadoop manages our "clustered" HDFS file system, we have to create its storage root and format it. This "mount point" is declared in core-site.xml; the layout under /u01/hadoop-1.2.1/data is created and used by the other Hadoop components (MapReduce uses the /mapred/... structure, HDFS uses the /dfs/... structure). Format the HDFS file system, then start the Java daemons of the HDFS system. As an additional check, you can use the Hadoop web UIs to inspect your HDFS configuration. Once our HDFS setup is done, you can use the HDFS file system to store data (big data :) ) and move it back and forth to Oracle databases by means of the Big Data Connectors, which is the next configuration step. You could create and use a Hive database, but in our case we will do a simple integration of raw data, through the creation of an external table in a local Oracle instance (on the same Linux box we run both the single-node Hadoop HDFS cluster and the Oracle database). Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from which I can get *.csv files for my big data simulations :). My example file, obsFrance.txt, is a comma-separated file with seven columns (as the external table definition below confirms). Download the Big Data Connectors from OTN (oraosch-2.2.0.zip) and unzip the archive to your local file system. Modify your environment so that the connector libraries are accessible, and run the following test:

[oracle@dg1 bin]$ ./hdfs_stream
Usage: hdfs_stream locationFile
[oracle@dg1 bin]$

Load the data into the Hadoop HDFS file system:

hadoop fs -mkdir bgtest_data
hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt
hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt

[oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
Found 1 items
-rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
[oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
Found 1 items
-rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

Check the content of HDFS with the browser UI.
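Before moving to the Oracle-side setup, here is the consolidated sketch promised above of the single-node configuration and bring-up. This is a rough sketch under stated assumptions: Hadoop 1.x default property names, the /u01/hadoop-1.2.1/data storage root described above, and the hdfs://localhost:19000 NameNode URI that appears in the connector output below; the JobTracker port (9001) is a conventional default, not a value taken from this setup.

```bash
# $HADOOP_HOME/conf/core-site.xml -- NameNode URI ("the mount point")
# and the storage root under which /dfs and /mapred are created:
cat > $HADOOP_HOME/conf/core-site.xml <<'EOF'
<configuration>
  <property><name>fs.default.name</name><value>hdfs://localhost:19000</value></property>
  <property><name>hadoop.tmp.dir</name><value>/u01/hadoop-1.2.1/data</value></property>
</configuration>
EOF

# $HADOOP_HOME/conf/hdfs-site.xml -- one DataNode only, so a single replica:
cat > $HADOOP_HOME/conf/hdfs-site.xml <<'EOF'
<configuration>
  <property><name>dfs.replication</name><value>1</value></property>
</configuration>
EOF

# $HADOOP_HOME/conf/mapred-site.xml -- local JobTracker:
cat > $HADOOP_HOME/conf/mapred-site.xml <<'EOF'
<configuration>
  <property><name>mapred.job.tracker</name><value>localhost:9001</value></property>
</configuration>
EOF

# Format the HDFS file system (first run only -- this wipes the storage root):
hadoop namenode -format

# Start the HDFS daemons (NameNode, DataNode, SecondaryNameNode):
start-dfs.sh

# Sanity checks: jps lists the running daemons; the NameNode web UI
# listens on http://localhost:50070 by default in Hadoop 1.x.
jps
hadoop fs -ls /
```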
Start the Oracle database and run the following script to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own database SID; replace it with yours):

#!/bin/bash
export ORAENV_ASK=NO
export ORACLE_SID=dg1
. oraenv
sqlplus /nolog <<EOF
CONNECT / AS sysdba;
CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
CREATE USER BGUSER IDENTIFIED BY oracle;
GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
GRANT EXECUTE ON sys.utl_file TO BGUSER;
GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
EOF

Put the following in a file named t3.sh and make it executable:

hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
oracle.hadoop.exttab.ExternalTable \
-D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
-D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
-D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
-D oracle.hadoop.exttab.columnCount=7 \
-D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
-D oracle.hadoop.connection.user=BGUSER \
-D oracle.hadoop.exttab.printStackTrace=true \
-createTable --noexecute

Then test the creation of the external table with it:

[oracle@dg1 samples]$ ./t3.sh
./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
Oracle SQL Connector for HDFS Release 2.2.0 - Production
Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
Enter Database Password:
The create table command was not executed. The following table would be created.

CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
(
  "C1" VARCHAR2(4000),
  "C2" VARCHAR2(4000),
  "C3" VARCHAR2(4000),
  "C4" VARCHAR2(4000),
  "C5" VARCHAR2(4000),
  "C6" VARCHAR2(4000),
  "C7" VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY "BGT_DATA_DIR"
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY 0X'0A'
    CHARACTERSET AL32UTF8
    STRING SIZES ARE IN CHARACTERS
    PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
    FIELDS TERMINATED BY 0X'2C'
    MISSING FIELD VALUES ARE NULL
    (
      "C1" CHAR(4000),
      "C2" CHAR(4000),
      "C3" CHAR(4000),
      "C4" CHAR(4000),
      "C5" CHAR(4000),
      "C6" CHAR(4000),
      "C7" CHAR(4000)
    )
  )
  LOCATION ( 'osch-20131022081035-74-1' )
) PARALLEL REJECT LIMIT UNLIMITED;

The following location files would be created.
osch-20131022081035-74-1 contains 1 URI, 54103 bytes
    54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results:

The create table command succeeded. The generated DDL is identical to the preview above, except that LOCATION now points to the newly generated location file:

LOCATION ( 'osch-20131022081719-3239-1' )

The following location files were created.
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
    54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
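At this point the external table is queryable like any regular table; every SELECT streams obsFrance.txt out of HDFS through the hdfs_stream preprocessor. Here is a minimal sanity check: the C1..C7 column names are the defaults generated by the connector run above, and the CTAS step assumes BGUSER has a quota on its default tablespace.

```bash
# Quick check on the new external table from the shell:
sqlplus -s BGUSER/oracle <<'EOF'
SELECT c1, c2, c3 FROM BGTEST_DP_XTAB WHERE ROWNUM <= 5;

-- Optionally materialize the HDFS data into a regular heap table:
CREATE TABLE obs_france AS SELECT * FROM BGTEST_DP_XTAB;
EOF
```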
This is the view from SQL Developer. And finally, the number of rows in the Oracle table imported from our Hadoop HDFS cluster:

SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

  COUNT(*)
----------
      1151

In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated with the Oracle infrastructure. Hadoop, Big Data, and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies:

NoSQL:
- Introduction to Oracle NoSQL Database
- Using Oracle NoSQL Database

Big Data:
- Introduction to Big Data
- Oracle Big Data Essentials
- Oracle Big Data Overview

Oracle Data Integrator:
- Oracle Data Integrator 12c: New Features
- Oracle Data Integrator 11g: Integration and Administration
- Oracle Data Integrator: Administration and Development
- Oracle Data Integrator 11g: Advanced Integration and Development

Oracle Coherence 12c:
- Oracle Coherence 12c: New Features
- Oracle Coherence 12c: Share and Manage Data in Clusters

Oracle GoldenGate 11g:
- Oracle GoldenGate 11g: Fundamentals for Oracle
- Oracle GoldenGate 11g: Fundamentals for SQL Server
- Oracle GoldenGate 11g: Fundamentals for DB2
- Oracle GoldenGate 11g: Fundamentals for Teradata
- Oracle GoldenGate 11g: Fundamentals for HP NonStop
- Oracle GoldenGate 11g Management Pack: Overview
- Oracle GoldenGate 11g: Troubleshooting and Tuning
- Oracle GoldenGate 11g: Advanced Configuration for Oracle

Other resources: Apache Hadoop: http://hadoop.apache.org/ is the home page for these technologies. "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for anyone who wants to know more about Hadoop, and some active "googling" will turn up further references.

About the author: Eugene Simos is based in France and joined Oracle through the BEA/WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked with the banking sector, AT&T, and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM, SOA, Identity and Security, GoldenGate, Virtualization, and Unified Communications Suite throughout the EMEA region.

    Read the article

< Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >