Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • How to test that invalid arguments raise an ArgumentError exception using RSpec?

    - by John Topley
    I'm writing a RubyGem that can raise an ArgumentError if the arguments supplied to its single method are invalid. How can I write a test for this using RSpec? The example below shows the sort of implementation I have in mind. The bar method expects a single boolean argument (:baz), the type of which is checked to make sure that it actually is a boolean:

        module Foo
          def self.bar(options = {})
            baz = options.fetch(:baz, true)
            validate_arguments(baz)
          end

          def self.validate_arguments(baz)
            raise(ArgumentError, ":baz must be a boolean") unless valid_baz?(baz)
          end

          def self.valid_baz?(baz)
            baz.is_a?(TrueClass) || baz.is_a?(FalseClass)
          end
        end
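
    A minimal RSpec sketch of the kind of expectation this calls for, using the standard raise_error matcher (the spec file name and require path are made up for the example):

        # spec/foo_spec.rb (hypothetical location)
        require 'rspec'
        require_relative 'foo'   # wherever the Foo module above lives

        describe Foo do
          it "raises an ArgumentError when :baz is not a boolean" do
            expect { Foo.bar(:baz => "not a boolean") }.to raise_error(ArgumentError, /:baz must be a boolean/)
          end

          it "does not raise when :baz is a boolean" do
            expect { Foo.bar(:baz => false) }.to_not raise_error
          end
        end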

    Read the article

  • iOS Development: Can I store an array of integers in a Core Data object without creating a new table to represent the array?

    - by BeachRunnerJoe
    Hello. I'm using Core Data and I'm trying to figure out the simplest way to store an array of integers in one of my Core Data entities. Currently, my entities contain various arrays of objects that are more complex than a single number, so it makes sense to represent those arrays as tables in my DB and attach them using relationships. If I want to store a simple array of integers, do I need to create a new table with a single column and attach it using a one-to-many relationship, or is there a simpler way? Thanks in advance for your wisdom!
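
    For reference, a minimal Objective-C sketch of the "no extra table" route: archive the numbers into a single blob and store that in one attribute. It assumes an existing NSManagedObject (managedObject) with a Binary Data attribute named numbers; both names are made up for the example.

        // Writing: collapse the NSNumber array into a single data blob
        NSArray *numbers = @[ @1, @2, @3 ];
        NSData *blob = [NSKeyedArchiver archivedDataWithRootObject:numbers];
        [managedObject setValue:blob forKey:@"numbers"];

        // Reading it back
        NSData *stored = [managedObject valueForKey:@"numbers"];
        NSArray *restored = [NSKeyedUnarchiver unarchiveObjectWithData:stored];

    A Transformable attribute does essentially the same archiving for you behind the scenes.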

    Read the article

  • How do I specify both icons for a universal iPhone/iPad app?

    - by byamabe
    I hope to create a single app that supports both the iPhone and the iPad. The app works in the simulator for both devices as desired. Now I'm trying to build and deploy it. I set the "Icon File" in the plist to the 57x57 .png image, and when I build and try to submit the app, iTunes Connect complains about needing a 72x72 .png image for the iPad. If I set "Icon File" to the 72x72 .png, iTunes Connect complains about needing a 57x57 image for the iPhone. How do I specify both icons in a single plist?
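
    A hedged sketch of the plist side: from iOS 3.2 on, Info.plist can list several icons under the CFBundleIconFiles array and each device picks the size it needs, which is how a universal binary carries both. The file names below are placeholders.

        <key>CFBundleIconFiles</key>
        <array>
            <string>Icon-57.png</string>
            <string>Icon-72.png</string>
        </array>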

    Read the article

  • Need guidance for my first Android application: how many activities should I use?

    - by jul
    Hi, I'm starting out with Android by building an application for searching restaurants, and some guidance would be welcome! On the first screen I'd like to have a search field with a submit button (I get the data from a web service), and below it a list with the results of the search. Clicking one of the items in the list will show a screen with the restaurant details as well as a map showing its location. My questions are: Can I do everything in one single activity, or should I have an activity for the search, one for the result list, one for the restaurant description, and another for the map? Would doing one single activity make the application more responsive? How can I use a list and a map within a normal activity (without ListActivity and MapActivity)? Any help, pointer, example application or sample code is very appreciated! Thank you Jul
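
    A minimal Java sketch of the multi-activity route, passing the selected restaurant's id to a second activity through an Intent extra (RestaurantDetailActivity is a hypothetical activity that would show the details and the map):

        import android.app.Activity;
        import android.content.Intent;

        public class SearchActivity extends Activity {
            // Called when a result in the list is tapped.
            private void showRestaurant(long restaurantId) {
                Intent intent = new Intent(this, RestaurantDetailActivity.class);
                intent.putExtra("restaurant_id", restaurantId);
                startActivity(intent);
            }
        }

    On the MapActivity point: with the classic Google Maps add-on a MapView only works inside a MapActivity, so the map screen usually has to be its own activity anyway.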

    Read the article

  • MVC Model architecture

    - by ATT
    I'm getting into CodeIgniter and trying to figure out a good architecture for my models. What kind of models would you create for the following simple example?

    - list page of blog entries: shows part of the entry data, number of comments
    - blog entry page: shows all the entry data, comment list (with part of the comment data)
    - comment page: shows all the comment data

    I'm trying to get this right so that it's simple and effective. I don't want to load too much information (from the db) on pages where I don't need it. E.g. should the same entry model handle both multiple entries and a single entry? And how should the comments be loaded? I only need the number of comments on the multiple-entries (list) page, but some of the comment data on the single-entry page. How would you handle this?
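
    A CodeIgniter 2-style sketch of one way to split it: a single entry model with a lightweight list query (counts comments, skips full data) and a full single-entry query, so the list page never pulls more than it shows. Table and column names are assumptions for the example.

        class Entry_model extends CI_Model {

            // List page: only the columns the index needs, plus a comment count.
            public function get_list($limit = 10)
            {
                $this->db->select('entries.id, entries.title, entries.created_at, COUNT(comments.id) AS comment_count');
                $this->db->from('entries');
                $this->db->join('comments', 'comments.entry_id = entries.id', 'left');
                $this->db->group_by('entries.id');
                $this->db->order_by('entries.created_at', 'desc');
                $this->db->limit($limit);
                return $this->db->get()->result();
            }

            // Entry page: the full record; its comments would come from a separate Comment_model.
            public function get_entry($id)
            {
                return $this->db->get_where('entries', array('id' => $id))->row();
            }
        }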

    Read the article

  • Linq to Sql GroupJoin Paging oddity

    - by OllyA
    I have noticed a strange SQL translation in a LINQ to SQL query I was trying to optimise. If I execute the following:

        Recipients
            .GroupJoin(RecipientAttributes,
                x => x.Recipient_Id,
                y => y.Recipient_Id,
                (x, y) => new { Recipient = x, Attributes = y })
            .Skip(1)
            .Take(1000)

    it executes as a single query, as expected. However

        Recipients
            .GroupJoin(RecipientAttributes,
                x => x.Recipient_Id,
                y => y.Recipient_Id,
                (x, y) => new { Recipient = x, Attributes = y })
            .Skip(0)
            .Take(1000)

    executes a separate query for each Attributes selection. Removing the Skip(0) makes no difference either. Can anyone explain this, and is there something I can do to get the first-page query executing in a single SQL statement?

    Read the article

  • What do C# Table Adapters actually return?

    - by Raven Dreamer
    I'm working on an application that manipulates SQL tables in a Windows Forms application. Up until now, I've only been using the pre-generated Fill queries and self-made update and delete queries (which return nothing). I am interested in retrieving a single value from a single column (an 'nchar(15)' name), and though I have easily written the SQL code to return that value to the application, I have no idea what it will be returned as:

        SELECT [Contact First Name]
        FROM ContactSkillSet
        WHERE [Contact ID] = @CurrentID

    Can the result be stored directly as a string? Do I need to cast it? Invoke a toString method?
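
    If the TableAdapter route feels opaque, a plain ADO.NET sketch of the same query shows what comes back: ExecuteScalar returns the first column of the first row as object, which casts to string for an nchar column (the connection string and id parameter here are placeholders):

        using System;
        using System.Data.SqlClient;

        public static class ContactQueries
        {
            // Returns the contact's first name, or null if no row matches.
            public static string GetFirstName(string connectionString, int currentId)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT [Contact First Name] FROM ContactSkillSet WHERE [Contact ID] = @CurrentID", conn))
                {
                    cmd.Parameters.AddWithValue("@CurrentID", currentId);
                    conn.Open();
                    object result = cmd.ExecuteScalar();        // object, or null if no row
                    // nchar(15) is space-padded, so trim before using the value.
                    return (result == null || result == DBNull.Value) ? null : ((string)result).Trim();
                }
            }
        }

    A TableAdapter query configured with ExecuteMode = Scalar in the dataset designer returns its value the same way.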

    Read the article

  • [C++] Start a thread using a method pointer

    - by Michael
    Hi! I'm trying to develop a thread abstraction (POSIX threads and threads from the Windows API), and I would very much like to be able to start them with a method pointer rather than a function pointer. What I would like is a thread abstraction class with a pure virtual method "runThread", which would be implemented by the future threaded class. I don't know yet about Windows threads, but to start a POSIX thread you need a function pointer, not a method pointer, and I can't manage to find a way to associate a method with an instance so it can work as a function. I probably just can't find the right keywords (I've been searching a lot); I think it's pretty much what Boost::Bind() does, so it must exist. Can you help me?
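
    The usual trick is a static trampoline: pthread_create gets an ordinary function pointer, and the object instance travels through the void* argument. A minimal POSIX-only sketch (no Windows side, no error handling):

        #include <pthread.h>

        class Thread {
        public:
            virtual ~Thread() {}
            virtual void runThread() = 0;      // implemented by the derived "threaded" class

            bool start() {
                // Pass the static trampoline as the start routine and smuggle
                // 'this' through the user-data pointer.
                return pthread_create(&handle_, 0, &Thread::trampoline, this) == 0;
            }
            void join() { pthread_join(handle_, 0); }

        private:
            static void* trampoline(void* self) {
                static_cast<Thread*>(self)->runThread();
                return 0;
            }
            pthread_t handle_;
        };

    A derived class overrides runThread() and calls start(); Boost::Bind (or std::thread with a member function in C++11) does essentially the same job with less boilerplate.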

    Read the article

  • Loading Unmanaged C++ in C#. Error Attempted to read or write protected memory

    - by Thatoneguy
    I have a C++ function that looks like this:

        __declspec(dllexport) int __stdcall RegisterPerson(char const * const szName)
        {
            std::string copyName(szName);

            // Assign name to a Google Protocol Buffers object
            // Pseudo code follows...
            Protobuf::Person person;
            person.set_name(copyName);

            // Error occurs here...
            std::cerr << person.DebugString() << std::endl;

            return 0;
        }

    The corresponding C# code looks like this:

        [DllImport(@"MyLibrary.dll", SetLastError = true)]
        public static unsafe extern int RegisterPerson([MarshalAs(UnmanagedType.LPTStr)] string szName);

    Not sure why this is not working. My C++ library is compiled as Multi-Threaded DLL with MultiByte encoding. Any help would be appreciated. I saw that this is a common problem online, but no answers led me to a solution for my problem.
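
    One hedged guess worth checking rather than a confirmed fix: UnmanagedType.LPTStr marshals the string as wide (Unicode) characters on NT-based Windows, while the native char const * expects a narrow ANSI string, so the data the native side receives is not what was sent. The narrow declaration would look like this:

        [DllImport(@"MyLibrary.dll", CallingConvention = CallingConvention.StdCall, SetLastError = true)]
        public static extern int RegisterPerson([MarshalAs(UnmanagedType.LPStr)] string szName);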

    Read the article

  • Python "string_escape" vs "unicode_escape"

    - by Mike Boers
    According to the docs, the builtin string encoding string_escape:

        Produce[s] a string that is suitable as string literal in Python source code

    ...while the unicode_escape:

        Produce[s] a string that is suitable as Unicode literal in Python source code

    So, they should have roughly the same behaviour. BUT, they appear to treat single quotes differently:

        >>> print """before '" \0 after""".encode('string-escape')
        before \'" \x00 after
        >>> print """before '" \0 after""".encode('unicode-escape')
        before '" \x00 after

    The string_escape escapes the single quote while the Unicode one does not. Is it safe to assume that I can simply:

        >>> escaped = my_string.encode('unicode-escape').replace("'", "\\'")

    ...and get the expected behaviour?

    Read the article

  • Learn another useful programming language [closed]

    - by Sebi
    I know this question has been asked here many times and can't really be answered definitively, but I'm not looking for a single name, rather for advice in my situation. I learned programming with Java and have now been developing in Java for more or less 5 years (at the university), and I think my programming skills there are really OK/average. I also have a little experience in C/C++ and C#. Now I have some spare time and I'd like to learn a new language or deepen my knowledge of Java/C/C++. But how do I choose the right language to learn? I'd like to learn a language which will be useful in the future for working in the software development business. I know there is no single answer, but I'm sure you could mention some languages that are more useful than others.

    Read the article

  • Splitting a UL into three even lists

    - by Andy
    I am printing a menu using a UL. The trouble is that the order generated by my script is ignored, because I'm printing the LIs one after the other and they're spanning three across, so the items read left-to-right (1 2 3) instead of down each column. To counteract this I wanted to split my single UL into three, so that the order would be maintained. Here is my current code, which works perfectly to print a single UL:

        // Category Drop Down Menu
        $this->CategoryDropDownMenu = '<ul id="subcatmenu">';
        foreach ($sitemap->CategoryMenu as $val)
            $this->CategoryDropDownMenu .= '<li><a href="'.$val[host].$val[link].'"><span>'.htmlspecialchars($val[title]).'</span></a></li>';
        $this->CategoryDropDownMenu .= '</ul>';
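
    A minimal sketch of one way to do the split, using array_chunk() to break the flat item list into three roughly equal columns and print a <ul> per column. The $items array and class name below are placeholders, not the original menu data:

        <?php
        // Placeholder data standing in for the generated menu items.
        $items = array('1', '2', '3', '4', '5', '6', '7', '8', '9');

        // Split into three roughly equal columns, preserving the original order.
        $columns = array_chunk($items, (int) ceil(count($items) / 3));

        $html = '';
        foreach ($columns as $column) {
            $html .= '<ul class="subcatmenu-column">';
            foreach ($column as $item) {
                $html .= '<li>' . htmlspecialchars($item) . '</li>';
            }
            $html .= '</ul>';
        }
        echo $html;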

    Read the article

  • Working with sets of rows in (My)SQL and comparing values

    - by Pep.
    Hello, I am trying to figure out the SQL for doing some relatively simple operations on sets of records in a table, but I am stuck. Consider a table with multiple rows per item, all identified by a common key. For example:

        serial  model  color
        XX1     A      blue
        XX2     A      blue
        XX3     A      green
        XX5     B      red
        XX6     B      blue
        XX1     B      blue

    What I would, for example, want to do is:

    1. Assuming that all model A rows must have the same color, find the rows which don't (for example, XX3 is green).
    2. Assuming that a given serial number can only point to a single type of model, find the rows where that does not hold (for example, XX1 points both to A and B).

    These are all logically simple things to do. To abstract it, I want to know how to group things using a single key (or combination of keys) and then compare the values of those records. Should I use a join on the same table? Should I use some sort of array or similar? Thanks for your help.
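
    A minimal sketch of the GROUP BY / HAVING approach, assuming the table is called items (the question doesn't name it):

        -- Models that appear with more than one color (case 1)
        SELECT model
        FROM items
        GROUP BY model
        HAVING COUNT(DISTINCT color) > 1;

        -- Serials that point to more than one model (case 2)
        SELECT serial
        FROM items
        GROUP BY serial
        HAVING COUNT(DISTINCT model) > 1;

        -- Join back to the table to list the offending rows themselves
        SELECT i.*
        FROM items i
        JOIN (SELECT serial
              FROM items
              GROUP BY serial
              HAVING COUNT(DISTINCT model) > 1) dupes
          ON dupes.serial = i.serial;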

    Read the article

  • Creating Oracle stored procedure that returns data

    - by user3614327
    In Firebird you can create a stored procedure that returns data and invoke it like a table, passing the arguments:

        create or alter procedure SEL_MAS_IVA (
            PCANTIDAD double precision)
        returns (
            CANTIDAD_CONIVA double precision)
        as
        begin
            CANTIDAD_CONIVA = pCANTIDAD * (1.16);
            suspend;
        end

        select * from SEL_MAS_IVA(100)

    will return a single-row, single-column (named CANTIDAD_CONIVA) relation with the value 116. This is a very simple example. The stored procedure can of course have any number of input and output parameters and do whatever it needs to return data (including multiple rows), which is accomplished by the "suspend" statement (which, as its name implies, suspends the SP execution, returns data to the caller, and resumes with the next statement). How can I create this kind of stored procedure in Oracle?
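
    One Oracle feature that comes close to the SELECT-able, SUSPEND-style procedure is a pipelined table function (a function returning SYS_REFCURSOR is the other common option). A minimal sketch; the type and function names are made up:

        CREATE TYPE iva_row AS OBJECT (cantidad_coniva NUMBER);
        /
        CREATE TYPE iva_tab AS TABLE OF iva_row;
        /
        CREATE OR REPLACE FUNCTION sel_mas_iva (pcantidad IN NUMBER)
          RETURN iva_tab PIPELINED
        IS
        BEGIN
          -- Each PIPE ROW hands one row back to the caller, roughly like SUSPEND.
          PIPE ROW (iva_row(pcantidad * 1.16));
          RETURN;
        END;
        /
        SELECT * FROM TABLE(sel_mas_iva(100));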

    Read the article

  • Partially flattening a list

    - by alj
    This is probably a really silly question but, given the example code at the bottom, how would I get a single list that retains the tuples? (I've looked at itertools, but it flattens everything.) What I currently get is:

        ('id', 20, 'integer')
        ('companyname', 50, 'text')
        [('focus', 30, 'text'), ('fiesta', 30, 'text'), ('mondeo', 30, 'text'), ('puma', 30, 'text')]
        ('contact', 50, 'text')
        ('email', 50, 'text')

    What I would like is a single-level list like:

        ('id', 20, 'integer')
        ('companyname', 50, 'text')
        ('focus', 30, 'text')
        ('fiesta', 30, 'text')
        ('mondeo', 30, 'text')
        ('puma', 30, 'text')
        ('contact', 50, 'text')
        ('email', 50, 'text')

        def getproducts():
            temp_list = []
            product_list = ['focus', 'fiesta', 'mondeo', 'puma']  # usually this would come from a db
            for p in product_list:
                temp_list.append((p, 30, 'text'))
            return temp_list

        def createlist():
            column_title_list = (
                ("id", 20, "integer"),
                ("companyname", 50, "text"),
                getproducts(),
                ("contact", 50, "text"),
                ("email", 50, "text"),
            )
            return column_title_list

        for item in createlist():
            print item

    Thanks ALJ
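
    A minimal sketch of one-level flattening that leaves tuples intact, assuming anything that is a list (as opposed to a tuple) should be expanded in place:

        def flatten_one_level(items):
            """Expand nested lists by one level; tuples pass through untouched."""
            flat = []
            for item in items:
                if isinstance(item, list):
                    flat.extend(item)
                else:
                    flat.append(item)
            return flat

        for item in flatten_one_level(createlist()):
            print item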

    Read the article

  • Rails: updating an item along with associated items

    - by shmichael
    Suppose I have a simple to-do list application. The application contains two models:

    - lists (have an owner and description)
    - items (have a name and due date) that belong to a specific list

    I would like to have a single edit screen for a list, on which I update the list attributes (such as description) and also create/delete/modify associated items. There should be a single "save" button that commits all changes. Unless save is pressed, any change to the list and the items should be forgotten. I wasn't able to find an elegant best practice for this. I would greatly appreciate any suggestions and/or references to existing implementations.
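
    A minimal sketch of the usual Rails answer, nested attributes on the parent model (the form side would use fields_for :items; model and attribute names follow the question):

        class List < ActiveRecord::Base
          has_many :items, :dependent => :destroy
          accepts_nested_attributes_for :items, :allow_destroy => true
        end

        class Item < ActiveRecord::Base
          belongs_to :list
        end

    With this in place a single form submit can update the list's own attributes and create, change, or remove items through params[:list][:items_attributes], so one "save" button commits everything, and nothing is persisted until that save happens.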

    Read the article

  • Can I pass an array as arguments to a method with variable arguments in Java?

    - by user352382
    I'd like to be able to create a function like:

        class A {
            private String extraVar;

            public String myFormat(String format, Object... args) {
                return String.format(format, extraVar, args);
            }
        }

    The problem here is that args is treated as Object[] in the method myFormat, and thus is a single argument to String.format, while I'd like every single Object in args to be passed as a new argument. Since String.format is also a method with variable arguments, this should be possible. If this is not possible, is there a method like String.format(String format, Object[] args)? In that case I could prepend extraVar to args using a new array and pass it to that method.
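
    A minimal sketch of the array-prepending approach the question describes: String.format(String, Object...) accepts an explicit Object[] as its varargs parameter, so building a new array with extraVar in front works directly.

        class A {
            private String extraVar = "extra";

            public String myFormat(String format, Object... args) {
                // Build a new array with extraVar first, then hand the whole array
                // to String.format, which treats it as the complete varargs list.
                Object[] all = new Object[args.length + 1];
                all[0] = extraVar;
                System.arraycopy(args, 0, all, 1, args.length);
                return String.format(format, all);
            }

            public static void main(String[] argv) {
                System.out.println(new A().myFormat("%s %s %s", "b", "c"));  // prints "extra b c"
            }
        }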

    Read the article

  • How to turn one server into many servers? (Virtualization/VMWare)

    - by user1229962
    I'm hoping for a high level discussion of this problem I know is quickly approaching my application. I have a server that binds on a specific port and manages TCP Sockets from my clients. I know that it is common practice to use VMWare to virtualize servers and run multiple servers at once. How can a single server design be changed to support multiple servers? Multiple servers can't bind to the same port. If I had to guess I would say a proxy server binds to the port and then sends connections off to the other servers to be handled as if it was still a single server application. I'm wondering what options there are and what are the common practices for solving this problem? Thanks in advance!

    Read the article

  • Did a recent WinXP update break CD/DVD read speeds? SP2/SP3

    - by quack quixote
    I have two systems with fresh installations of Windows XP Pro SP3 (SP3 slipstreamed into the installer; fully updated after install). One's a refurbished 2.4GHz Pentium4 system; the other is a new 1.6GHz Atom330 build. Both have brand-new dual-layer CD/DVD burners (one's a LiteOn IDE, the other an LG SATA). Both take a really looooong time to read a single-layer DVD in Windows with Cygwin tools -- specifically, 40 minutes or more. I burn backup data to single-layer DVD+/-R and use MD5 hashes for data verification (made with the standard md5sum tool in Unix or Cygwin). The hashes are burned to disc with the data files, and I use this command to verify:

        $ cd /path/to/disc/mountpoint ; time md5sum -c < md5.txt

    Here's how long that takes to run on a full single-layer DVD+/-R disc:

    - Old system (WinXP SP2, 1.8GHz Athlon 2500+, last summer): ~10 minutes
    - Old system (Ubuntu 9.04, 1.8GHz Athlon 2500+): ~10 minutes
    - Old system (Debian 5, dual 550MHz P3): ~10 minutes
    - New Pentium4 system (running Ubuntu 9.04): ~5 minutes
    - New Pentium4 system (running WinXP SP3, file copy from Win Explorer): ~6 minutes
    - New Atom330 system (running WinXP SP3, file copy from Win Explorer): ~6 minutes

    Now the weird stuff:

    - Old system (WinXP SP2, 1.8GHz Athlon 2500+, today): ~25 minutes
    - New Pentium4 system (running WinXP SP3, read from Cygwin): ~40-50 minutes (?!!)
    - New Atom330 system (running WinXP SP3, read from Cygwin): ~40 minutes (can do it in ~30 minutes... if I have another program spin up the drive first)

    Since both systems will copy files in 6 minutes using Windows Explorer, I know it's not a hardware problem. Windows just never spins up the drive during the Cygwin read, so it stays super-slow the whole time. Other programs like EAC and DVD Decrypter seem to spin up the disc just fine during their processing. DMA is enabled on both systems (I can confirm this in Windows' Device Manager on the Atom330, not on the P4). Nero's DriveSpeed tool doesn't seem to have any effect. Copy times are comparable from the command line with Windows' xcopy. Copying with Cygwin's cp looks more like the problem state -- it will spin up the drive for a short time, never reach full speed, and let it spin back down again for most of the copy. What I need is to get full read speeds from Cygwin. Is this a known issue with SP3 or some other recent Windows update? Any other ideas?

    Update: More testing; Windows will spin up the drive when data is copied with Windows tools, but not when read in place or copied with Cygwin tools. It doesn't make sense to me that Windows spins up the drive for copying but not for other reads. Might this be more of a Cygwin problem?

    Update 2: GUI activity is sluggish during the problem state -- during the Cygwin verifies, there's a slight but noticeable delay when dragging windows or icons around on the desktop, switching windows, Alt-Tabbing through open applications, opening new windows, etc. It reminds me of the delay when opening a Windows Explorer window on My Computer just after inserting a DVD. I've tried updating Cygwin (from 1.5.x to 1.7.x), but there's no change in the problem behavior. I've also noticed this issue occurs on WinXP SP2, but it's not exactly the same -- some spin-up occurs, so the read happens in ~25-30 minutes instead of 40+. The SP2 system used to run the verifies in ~10 minutes, and when it first changed (not sure exactly when, maybe in late November or early December 2009) I thought it was dying hardware. This is why I suspect an official update of breaking this functionality; it has worked for years on that SP2 box.

    Read the article

  • can't login to new install of SQL 2008 x64 via SSMS

    - by tpcolson
    I have performed a fresh install of SQL 2008 x64 on a fresh install of Server 2008 R2 x64 in an AD environment. Upon install completion, I cannot log in to the SQL instance via SSMS, with the following error:

        Login failed for user domain\user. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: ]

    Background: the server is correctly joined to the AD domain, the install was performed with defaults, Windows authentication only (per organizational rules), the SQL install completes with no errors, domain\user was added as SQL admin during setup account provisioning, I am logged in to the console as domain\user when this error occurs, Windows Firewall is OFF, and UAC is ON (and will never be turned off, in accordance with organizational policy). To troubleshoot this error I have tried:

    - Run SSMS as administrator: fail
    - Start SQL in single user mode, run SSMS: fail
    - Start SQL in single user mode, run SSMS as administrator: success
    - Start SQL in single user mode, run SSMS as administrator, remove domain\user from sysadmin group, re-add, run SSMS: fail
    - Any combination and permutation of log off and log on, reboot, and chant Gregorian prayers: fail
    - Reimage server with 2008 x64, slipstream SP2 into the SQL 2008 install: all the above troubleshooting steps are repeatable exactly, so I've narrowed this down to not being an SP issue (this is NOT SQL 2008 R2)

    Any suggestion on how to grant management access to this fresh install of SQL 2008 via SSMS? Our organizational policy is no console access to servers; management will be done via management tools installed on client workstations. domain\user is a group of 8 users who will have SSMS installed on workstations. However, we can't even access SQL via SSMS from the console! We cannot deploy this in an environment where these 8 users will have to sneak into the server closet on the weekends and have console access to SQL and run SSMS as administrator.

    EDIT: domain\group is a replacement for the actual object; the queries indicate that domain\group does indeed have the right privileges....!?!

        EXEC xp_logininfo 'domain\group'
        go

        account name     type    privilege   mapped login name   permission path
        'domain\group'   group   admin       'domain\group'      NULL

    xp_logininfo seems to show 'domain\group' in the SQL admin group;

        SELECT A.name AS 'Role', B.name AS 'Login'
        FROM sys.server_role_members C
        INNER JOIN sys.server_principals A ON A.principal_id = C.role_principal_id
        INNER JOIN sys.server_principals B ON B.principal_id = C.member_principal_id
        go

        Role      Login
        sysadmin  sa
        sysadmin  NT AUTHORITY\SYSTEM
        sysadmin  NT SERVICE\MSSQLSERVER
        sysadmin  NT SERVICE\SQLSERVERAGENT
        sysadmin  domain\group

        SELECT PRINCIPAL_ID AS [Principal ID],
               NAME AS [User],
               TYPE_DESC AS [Type Description],
               IS_DISABLED AS [Status]
        FROM sys.server_principals
        GO

        Principal ID  User                                      Type Description           Status
        1             sa                                        SQL_LOGIN                  1
        2             public                                    SERVER_ROLE                0
        3             sysadmin                                  SERVER_ROLE                0
        4             securityadmin                             SERVER_ROLE                0
        5             serveradmin                               SERVER_ROLE                0
        6             setupadmin                                SERVER_ROLE                0
        7             processadmin                              SERVER_ROLE                0
        8             diskadmin                                 SERVER_ROLE                0
        9             dbcreator                                 SERVER_ROLE                0
        10            bulkadmin                                 SERVER_ROLE                0
        101           ##MS_SQLResourceSigningCertificate##      CERTIFICATE_MAPPED_LOGIN   0
        102           ##MS_SQLReplicationSigningCertificate##   CERTIFICATE_MAPPED_LOGIN   0
        103           ##MS_SQLAuthenticatorCertificate##        CERTIFICATE_MAPPED_LOGIN   0
        105           ##MS_PolicySigningCertificate##           CERTIFICATE_MAPPED_LOGIN   0
        257           ##MS_PolicyTsqlExecutionLogin##           SQL_LOGIN                  1
        259           NT AUTHORITY\SYSTEM                       WINDOWS_LOGIN              0
        260           NT SERVICE\MSSQLSERVER                    WINDOWS_GROUP              0
        262           NT SERVICE\SQLSERVERAGENT                 WINDOWS_GROUP              0
        263           ##MS_PolicyEventProcessingLogin##         SQL_LOGIN                  1
        264           ##MS_AgentSigningCertificate##            CERTIFICATE_MAPPED_LOGIN   0
        265           domain\group                              WINDOWS_GROUP              0

        (21 rows affected)

    Read the article

  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (don't need transitions, effects, etc.). Basically, pitivi would do what I want - except, I use 640x480 30 fps AVI's from a camera, and as soon as I put in over a couple of minutes of that kind of material, pitivi starts freezing on preview, and thus becomes unusable. So, I started looking for a command line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are so far candidates, but I cannot find examples of the use I have in mind.   Basically, I'd imagine there's an encoder and player tools (like ffmpeg vs ffplay; or mencoder vs mplayer) - such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution - a pseudocode would look like: videnctool -compose --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 --file=mypicture.png --duration=00:00:02:00 --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 --output=editedvid.avi ... or, it could have a "playlist" text file, like: vid1.avi 00:00:30:12 00:01:45:00 vid2.avi 00:05:00:00 00:07:12:25 mypicture.png - 00:00:02:00 vid3.avi 00:02:00:00 00:02:45:10 ... so it could be called with videnctool -compose --playlist=playlist.txt --output=editedvid.avi The idea here would be that all of the videos are in the same format - allowing the tool to avoid transcoding, and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") - or in lack of that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.   The thing is, I can so far see that mencoder and ffmpeg can operate on individual files; e.g. cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting - so you can define multiple cut regions, but it's again attributed to a single file). Which implies I have to work on cutting pieces first from individual files first (each of which would demand own temporary file on disk), and then joining them in a final video file. I would then imagine, that there is a corresponding player tool, which can read the same command line option format / playlist file as the encoding tool - except it will not generate an output file, but instead play the video; e.g. in pseudocode: vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13 ... and, given there's enough memory, it would generate a low-res video preview in RAM, and play it back in a window, while offering some limited interaction ( like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, and include any file that may end up in that region in the playlist. Thus, the end result of all this would be: command line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering final output... which I myself think would be nice. So, while I think that all of the above may be a bit of a stretch - does there exist anything that would approximate the workflow described above?

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform including now directly from Microsoft. jQuery is a light weight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading to do JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level for me by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS) as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation and we can look forward toward more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET which provides some piece of mind to some corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many projects types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery Microsoft is now following along what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers they lacked in useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos for Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side so that you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft’s embracing of jQuery is that Microsoft has started contributing to jQuery via standard mechanism set for jQuery developers: By submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library and re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team and that’s exactly what happened with the three plug-ins submitted by Microsoft with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library while functionality for the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid from manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML which you can then inject into the document or replace existing content with. Output from templates are rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimize client side code and reduce repeated code for rendering logic. Instead a single template can be used in many places for updating and adding content to existing pages. Further if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to a use a single markup template to handle all rendering of each specific HTML section/element. I’ve used a number of different client rendering template engines with jQuery in the past including jTemplates (a PHP style templating engine) and a modified version of John Resig’s MicroTemplating engine which I built into my own set of libraries because it’s such a commonly used feature in my client side applications. jQuery templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original Micro Template engine, the core basics of the templating engine create JavaScript code which means that templates can include JavaScript code. To give you a basic idea of how templates work imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document. 
To do this you can create an ‘item’ template that describes how each of the quotes is renderd as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array the template is automatically applied to each each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the tempalte that is currently executing. This makes it possible to keep context within the context of the template itself and also to pass context from a parent template to a child template which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. 
In short there are options The above shows off some of the basics, but there’s much for functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery templates will become a native component in jQuery Core 1.5, so it’s definitely worthwhile checking out the engine today and get familiar with this interface. As much as I’m stoked about templating becoming part of the jQuery core because it’s such an integral part of many applications, there are also a couple shortcomings in the current incarnation: Lack of Error Handling Currently if you embed an expression that is invalid it’s simply not rendered. There’s no error rendered into the template nor do the various  template functions throw errors which leaves finding of bugs as a runtime exercise. I would like some mechanism – optional if possible – to be able to get error info of what is failing in a template when it’s rendered. No String Output Templates are always rendered into a jQuery matched set and there’s no way that I can see to directly render to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output. Limited JavaScript Access Unlike John Resig’s original MicroTemplating Engine which was entirely based on JavaScript code generation these templates are limited to a few structured commands that can ‘execute’. There’s no code execution inside of script code which means you’re limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic. Error handling has been discussed quite a bit and it’s likely there will be some solution to that particualar issue by the time jQuery templates ship. The others are relatively minor issues but something to think about anyway. jQuery Data Link http://api.jquery.com/category/plugins/data-link/ jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object and have the object updated when the text in the textbox is changed and have the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from string to some other value typically necessary for mapping things like textbox string input to say a number property and potentially applying additional formatting and calculations. In theory this sounds great, however in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data: person = { firstName: "rick", lastName: "strahl"}; $(document).ready( function() { // provide for two-way linking of inputs $("form").link(person); // bind to non-input elements explicitly $("#objFirst").link(person, { firstName: { name: "objFirst", convertBack: function (value, source, target) { $(target).text(value); } } }); $("#objLast").link(person, { lastName: { name: "objLast", convertBack: function (value, source, target) { $(target).text(value); } } }); }); This code hooks up two-way linking between a couple of textboxes on the page and the person object. 
The first line in the .ready() handler provides mapping of object to form field with the same field names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements which is effectively a one-way bind. You specify the object and a then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the input elements bound. Unfortunately the syntax to do this is not very natural as you have to rely on the jQuery data object. To update an object’s properties and get change notification looks like this: function updateFirstName() { $(person).data("firstName", person.firstName + " (code updated)"); } This works fine in causing any linked fields to be updated. In the bindings above both the firstName input field and objFirst DOM element gets updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure you’re binding through multiple layers of abstraction now but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings you’re going to end up duplicating work loading the data using some other mechanism. There’s no easy way to re-bind data with a different object altogether since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two way binding can be very useful but it has to be easy and most importantly natural. If it’s more work to hook up a binding than writing a couple of lines to do binding/unbinding this sort of thing helps very little in most scenarios. In talking to some of the developers the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point. 
jQuery Globalization http://github.com/jquery/jquery-global Globalization in JavaScript applications often gets short shrift and part of the reason for this is that natively in JavaScript there’s little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we’re fairly spoiled by the richness of APIs provided in the framework and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application the globalization plug-in also helps with some basic tasks for non-localized applications: Dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following for example to format numbers and dates: var date = new Date(); var output = $.format(date, "MMM. dd, yy") + "\r\n" + $.format(date, "d") + "\r\n" + // 10/25/2010 $.format(1222.32213, "N2") + "\r\n" + $.format(1222.33, "c") + "\r\n"; alert(output); This becomes even more useful if you combine it with templates which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote again with a couple of formats applied: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div> <div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div> <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script> There are also parsing methods that can parse dates and numbers from strings into numbers easily: alert($.parseDate("25.10.2010")); alert($.parseInt("12.222")); // de-DE uses . for thousands separators As you can see culture specific options are taken into account when parsing. The globalization plugin provides rich support for a variety of locales: Get a list of all available cultures Query cultures for culture items (like currency symbol, separators etc.) Localized string names for all calendar related items (days of week, months) Generated off of .NET’s supported locales In short you get much of the same functionality that you already might be using in .NET on the server side. The plugin includes a huge number of locales and an Globalization.all.min.js file that contains the text defaults for each of these locales as well as small locale specific script files that define each of the locale specific settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper. 
Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting which is a vital feature of many applications. Changes for Microsoft It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery process of plug-ins and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  

    Read the article

  • Multi-tenant ASP.NET MVC – Introduction

    - by zowens
    I’ve read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding multi-tenancy. What I’ve come to realize is that these implementations overcomplicate the issues and give only a muddy implementation! I’ve seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net. Through this process, I’ve realized a few different techniques to make building multi-tenant applications actually quite easy. I will be posting a few different entries over the issue and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.   So what’s the problem? Here’s the deal. Multi-tenancy is basically a technique of code-reuse of web application code. A multi-tenant application is an application that runs a single instance for multiple clients. Here the “client” is different URL bindings on IIS using ASP.NET MVC. The problem with different instances of the, essentially, same application is that you have to spin up different instances of ASP.NET. As the number of running instances of ASP.NET grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy March. As the blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disc storage. If you use the same code base for many applications, multi-tenancy just makes sense. You’ll reduce the amount of work it takes to synchronize the site implementations and you’ll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.   You’d think this would be simple. I have actually seen a real lack of reference material on the subject in terms of ASP.NET MVC. This is somewhat surprising given the number of users of ASP.NET MVC. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It’s not straight-forward because the possibilities of implementation are endless. I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton’s implementation (all the entries are listed on this page). There’s some really nasty code in there… something I’d really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion. Once I start seeing Reflection.Emit, I have to assume the worst :) In all seriousness, if his implementation makes sense to you, use it! It’s a fine implementation that should be given a look. At least look at the code. I will reference MvcEx going forward as a comparison to my implementation. I will explain why my approach differs from MvcEx and how it is better or worse (hopefully better).   Core Goals of my Multi-Tenant Implementation The first, and foremost, goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass around containers quite frequently and rely on their use heavily. I will be using StructureMap in my implementation. However, you could probably use your favorite IoC tool instead. <RANT> However, please don’t be stupid and abstract your IoC tool. Each IoC is powerful and by abstracting the capabilities, you’re doing yourself a real disservice. Who in the world swaps out IoC tools…? 
No one!</RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along. This is really an invaluable tool in my tool belt and simple to use in my multi-tenant implementation. The second core goal is to represent a tenant as easily as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants. This will also allow different ways of “plugging in” tenants into your application. In my implementation, there will be a single dependency container for a single tenant. This will enable isolation of the dependencies of the tenant. The third goal is to use composition as a means to delegate “core” functions out to the tenant. More on this later.   Features In MvcExt, “Modules” are a code element of the infrastructure. I have simplified this concept and have named this “Features”. A feature is a simple element of an application. Controllers can be specified to have a feature and actions can have “sub features”. Each tenant can select features it needs and the other features will be hidden to the tenant’s users. My implementation doesn’t require something to be a feature. A controller can be common to all tenants. For example, (as you will see) I have a “Content” controller that will return the CSS, Javascript and Images for a tenant. This is common logic to all tenants and shouldn’t be hidden or considered a “feature”; Content is a core component.   Up next My next post will be all about the code. I will reveal some of the foundation to the way I do multi-tenancy. I will have posts dedicated to Foundation, Controllers, Views, Caching, Content and how to setup the tenants. Each post will be in-depth about the issues and implementation details, while adhering to my core goals outlined in this post. As always, comment with questions of DM me on twitter or send me an email.


  • Content Management for WebCenter Installation Guide

    - by Gary Niu
    Overview
As we know, there are two ways to install Content Management for WebCenter: through the WebCenter installer wizard, or with its own installer. This guide covers the latter. For SSO purposes, I also describe how to configure an OID identity store for Content Management for WebCenter.

Content Management for WebCenter (10.1.3.5.1)
Oracle Enterprise Linux R5U4

Basic Installation
-bash-3.2$ ./setup.sh
Please select your locale from the list.
  1. Chinese-Simplified
  2. Chinese-Traditional
  3. Deutsch
 *4. English-US
  5. English-UK
  6. Español
  7. Français
  8. Italiano
  9. Japanese
 10. Korean
 11. Nederlands
 12. Português-Brazil
Choice?

Throughout the install, when entering a text value, you can press Enter to accept the default that appears between square brackets ([]). When selecting from a list, you can select the choice followed by an asterisk by pressing Enter.

Select installation type from the list.
 *1. Install new server
  2. Update a server
Choice?

Content Server Installation Directory
Please enter the full pathname to the installation directory.
Content Server Core Folder [/oracle/ucm/server]:/opt/oracle/ucm/server
Create Directory
 *1. yes
  2. no
Choice?

Java virtual machine
 *1. Sun Java 1.5.0_11 JDK
  2. Specify a custom Java virtual machine
Choice?
Installing with Java version 1.5.0_11.

Enter the location of the native file repository. This directory contains the native files checked in by contributors.
Content Server Native Vault Folder [/opt/oracle/ucm/server/vault/]:
Create Directory
 *1. yes
  2. no
Choice?

Enter the location of the web-viewable file repository. This directory contains files that can be accessed through the web server.
Content Server Weblayout Folder [/opt/oracle/ucm/server/weblayout/]:
Create Directory
 *1. yes
  2. no
Choice?

This server can be configured to manage its own authentication or to allow another master to act as an authentication proxy.
Configure this server as a master or proxied server.
 *1. Configure as a master server.
  2. Configure as server proxied by a local master server.
Choice?

During installation, an admin server can be installed and configured to manage this server. If there is already an admin server on this system, you can have the installer configure it to administrate this server instead.
Select admin server configuration.
 *1. Install an admin server to manage this server.
  2. Configure an existing admin server to manage this server.
  3. Don't configure an admin server.
Choice?

Enter the location of an executable to start your web browser. This browser will be used to display the online help.
Web Browser Path [/usr/bin/firefox]:

Content Server System locale
  1. Chinese-Simplified
  2. Chinese-Traditional
  3. Deutsch
 *4. English-US
  5. English-UK
  6. Español
  7. Français
  8. Italiano
  9. Japanese
 10. Korean
 11. Nederlands
 12. Português-Brazil
Choice?

Please select the region for your timezone from the list.
 *1. Use the timezone setting for your operating system
  2. Pacific
  3. America
  4. Atlantic
  5. Europe
  6. Africa
  7. Asia
  8. Indian
  9. Australia
Choice?
Please enter the port number that will be used to connect to the Content Server. This port must be otherwise unused.
Content Server Port [4444]:

Please enter the port number that will be used to connect to the Admin Server. This port must be otherwise unused.
Admin Server Port [4440]:

Enter a security filter for the server port. Hosts which are allowed to communicate directly with the server port may access any resources managed by the server. Insure that hosts which need access are included in the filter. See the installation guide for more details.
Incoming connection address filter [127.0.0.1]:*.*.*.*

*** Content Server URL Prefix
The URL prefix specified here is used when generating HTML pages that refer to the contents of the weblayout directory within the installation. This prefix must be mapped in the web server Additional Document Directories section of the Content Management administration menu to the physical location of the weblayout directory. For example, "/idc/" would be used in your installation to refer to the URL http://ucm.company.com/idc which would be mapped in the web server to the physical location /oracle/ucm/server/weblayout.
Web Server Relative Root [/idc/]:

Enter the name of the local mail server. The server will contact this system to deliver email.
Company Mail Server [mail]:

Enter the e-mail address for the system administrator.
Administrator E-Mail Address [sysadmin@mail]:

*** Web Server Address
Many generated HTML pages refer to the web server you are using. The address specified here will be used when generating those pages. The address should include the host and domain name in most cases. If your webserver is running on a port other than 80, append a colon and the port number. Examples: www.company.com, ucm.company.com:90
Web Server HTTP Address [yekki]:yekki.cn.oracle.com:7777

Enter the name for this instance. This name should be unique across your entire enterprise. It may not contain characters other than letters, numbers, and underscores.
Server Instance Name [idc]:

Enter a short label for this instance. This label is used on web pages to identify this instance. It should be less than 12 characters long.
Server Instance Label [idc]:

Enter a long description for this instance.
Server Description [Content Server idc]:

Web Server
 *1. Apache
  2. Sun ONE
  3. Configure manually
Choice?

Please select a database from the list below to use with the Content Server.
Content Server Database
 *1. Oracle
  2. Microsoft SQL Server 2005
  3. Microsoft SQL Server 2000
  4. Sybase
  5. DB2
  6. Custom JDBC settings
  7. Skip database configuration
Choice?

Manually configure JDBC settings for this database
  1. yes
 *2. no
Choice?

Oracle Server Hostname [localhost]:
Oracle Listener Port Number [1521]:

*** Database User ID
The user name is used to log into the database used by the content server.
Oracle User [user]:YEKKI_OCSERVER

*** Database Password
The password is used to log into the database used by the content server.
Oracle Password []:oracle
Oracle Instance Name [ORACLE]:orcl

Configure the JVM to find the JDBC driver in a specific jar file
  1. yes
 *2. no
Choice?

The installer can attempt to create the database tables or you can manually create them. If you choose to manually create the tables, you should create them now.
Attempt to create database tables
  1. yes
 *2. no
Choice?

Select components to install.
  1. ContentFolios: Collect related items in folios
  2. Folders_g: Organize content into hierarchical folders
  3. LinkManager8: Hypertext link management support
  4. OracleTextSearch: External Oracle 11g database as search indexer support
  5. ThreadedDiscussions: Threaded discussion management
Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: 1,2,3,4,5
 *1. ContentFolios: Collect related items in folios
 *2. Folders_g: Organize content into hierarchical folders
 *3. LinkManager8: Hypertext link management support
 *4. OracleTextSearch: External Oracle 11g database as search indexer support
 *5. ThreadedDiscussions: Threaded discussion management
Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: F

Checking configuration. . .
Configuration OK.

Review install settings. . .
Content Server Core Folder: /opt/oracle/ucm/server
Java virtual machine: Sun Java 1.5.0_11 JDK
Content Server Native Vault Folder: /opt/oracle/ucm/server/vault/
Content Server Weblayout Folder: /opt/oracle/ucm/server/weblayout/
Proxy authentication through another server: no
Install admin server: yes
Web Browser Path: /usr/bin/firefox
Content Server System locale: English-US
Content Server Port: 4444
Admin Server Port: 4440
Incoming connection address filter: *.*.*.*
Web Server Relative Root: /idc/
Company Mail Server: mail
Administrator E-Mail Address: sysadmin@mail
Web Server HTTP Address: yekki.cn.oracle.com:7777
Server Instance Name: idc
Server Instance Label: idc
Server Description: Content Server idc
Web Server: Apache
Content Server Database: Oracle
Manually configure JDBC settings for this database: false
Oracle Server Hostname: localhost
Oracle Listener Port Number: 1521
Oracle User: YEKKI_OCSERVER
Oracle Password: 6GP1gBgzSyKa4JW10U8UqqPznr/lzkNn/Ojf6M8GJ8I=
Oracle Instance Name: orcl
Configure the JVM to find the JDBC driver in a specific jar file: false
Attempt to create database tables: no
Components: ContentFolios,Folders_g,LinkManager8,OracleTextSearch,ThreadedDiscussions

Proceed with install
 *1. Proceed
  2. Change configuration
  3. Recheck the configuration
  4. Abort installation
Choice?

Finished install type Install with warnings at 4/2/10 12:32 AM.

Run Scripts
-bash-3.2$ ./wc_contentserverconfig.sh /opt/oracle/ucm/server /mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf
Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/CS10gR35UpdateBundle.zip'
Service 'DELETE_DOC' Extended
Service 'DELETE_BYREV_REVISION' Extended
Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/ContentAccess/ContentAccess-linux.zip'
(internal)  04.02 00:40:38.019  main  updateDocMetaDefinitionV11: adding decimal column
Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/Folders_g.zip'
Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/FusionLibraries.zip'
Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/JpsUserProvider.zip'
Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/WcConfigure.zip'
Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
WARNING: A password credential is expected; instead found .
Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.
Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
WARNING: A password credential is expected; instead found .
Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.
Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
WARNING: A password credential is expected; instead found .
Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.

Restart Content Server to apply updates.

Configuring Apache Web Server
Append the following line to httpd.conf:
include "/opt/oracle/ucm/server/data/users/apache22/apache.conf"

Configuring the Identity Store (Optional)
1. Stop Oracle Content Server and the Admin Server.
2. Update the Oracle Content Server's JPS configuration file, jps-config.xml:
   a. Add a service instance:
      <serviceInstance provider="idstore.ldap.provider" name="idstore.oid">
        <property name="subscriber.name" value="dc=cn,dc=oracle,dc=com"></property>
        <property name="idstore.type" value="OID"></property>
        <property name="security.principal.key" value="ldap.credential"></property>
        <property name="security.principal.alias" value="JPS"></property>
        <property name="ldap.url" value="ldap://yekki.cn.oracle.com:3060"></property>
        <extendedProperty>
          <name>user.search.bases</name>
          <values>
            <value>cn=users,dc=cn,dc=oracle,dc=com</value>
          </values>
        </extendedProperty>
        <extendedProperty>
          <name>group.search.bases</name>
          <values>
            <value>cn=groups,dc=cn,dc=oracle,dc=com</value>
          </values>
        </extendedProperty>
        <property name="username.attr" value="uid"></property>
        <property name="user.login.attr" value="uid"></property>
        <property name="groupname.attr" value="cn"></property>
      </serviceInstance>
   b. Ensure that the <jpsContext> entry in the jps-config.xml file refers to the new serviceInstance, that is, idstore.oid and not idstore.ldap:
      <jpsContext name="default">
        <serviceInstanceRef ref="idstore.oid"/>
3. Run the new script to set up the credentials for idstore.oid in the credential store:
   cd CONTENT_SERVER_HOME/custom/FusionLibraries/tools
   -bash-3.2$ ./run_credtool.sh
   Buildfile: ./../tools/credtool.xml
       [input] skipping input as property action has already been set.
       [input] Alias: [JPS]
       [input] Key: [ldap.credential]
       [input] User Name: cn=orcladmin
       [input] Password: welcome1
       [input] JPS Config: [/opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml]
   manage-creds:
        [echo] @@@ Help: run 'ant manage-creds' command to see the detailed usage
        [java] Using default context in /opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml file for credential store.
        [java] Credential store location : /opt/oracle/ucm/server/config
        [java] Credential with map JPS key ldap.credential stored successfully!
        [java]
        [java]
        [java]     Credential for map JPS and key ldap.credential is:
        [java]             PasswordCredential name : cn=orcladmin
        [java]             PasswordCredential password : welcome1
   BUILD SUCCESSFUL
   Total time: 1 minute 27 seconds

Testing
1. Access http://yekki.cn.oracle.com:7777/idc
2. Log in with an OID user, for example: orcladmin/welcome1
3. Make sure your JpsUserProvider status is "good"
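
As an extra sanity check before testing through the browser, you can verify the same OID connection details used in the serviceInstance above directly from the Content Server host. This is only an illustration using the values from this guide, and it assumes the OpenLDAP client tools are installed; it is not part of the official install steps.

# Bind as cn=orcladmin against the OID used above and look up a user under the configured search base
ldapsearch -x -h yekki.cn.oracle.com -p 3060 \
  -D "cn=orcladmin" -w welcome1 \
  -b "cn=users,dc=cn,dc=oracle,dc=com" "(uid=orcladmin)" uid cn

If this returns the orcladmin entry, the host, port, bind credentials and user search base in jps-config.xml are consistent with what OID expects.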

