Search Results

Search found 27691 results on 1108 pages for 'multi select'.


  • What to do as a new team lead on a project with maintainability problems?

    - by Mr_E
    I have just been put in charge of a code project with maintainability problems. What can I do to get the project on a stable footing? We are working with a very large multi-tiered .NET system that is missing a lot of the important things such as unit tests, IoC and MEF, and is full of static classes, plain datasets, etc. I'm only 24, but I've been here for almost three years (this app has been in development for 5), and mostly due to time constraints we've been just adding in more crap to fit the other crap. After doing a number of projects in my free time I have begun to understand just how important all those concepts are. Due to employee shifting I now find myself the team lead on this project, and I really want to come up with some smart ways to improve this app, ways whose value can be explained to management. I have ideas of what I would like to do, but they all seem overwhelming without much upfront gain. Any stories of how people have dealt with (or would deal with) this would be a very interesting read. Thanks.

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com   Extensions and add-ins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed. Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast since it is only a matter of extracting the files within the VSIX package to the local extension folder. But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, yet you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012 it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, one that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom feed XML? Well, search no more, this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.
    Installing the Service: Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool and (optionally) a virtual directory where you want to install the service. Note: if you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer. Then open web.config in a text editor, locate the <applicationSettings> element and edit the following setting values:
    - FeedTitle: the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
    - BaseURI: when Visual Studio downloads the extension, it will be given this URI plus the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/
    - VSIXAbsolutePath: the path where you will deploy your extensions. This can be a local folder or a remote share; you just need to make sure that the application pool identity account has read permissions on this folder.
    Save web.config to finish the installation. Open a browser and enter the URL to the service. It should show an empty feed page.
    Adding the Private Gallery in Visual Studio 2012: Now you need to add the gallery in Visual Studio. This is very easy and is done as follows: go to Tools –> Options and select Environment –> Extensions and Updates, press Add to add a new gallery, enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step. Press OK to save the settings.
    Deploying an Extension: This one is easy: just drop the file in the designated folder! :-)  If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio Gallery. I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!

    Read the article

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that can not tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period:
    - Reduce the size of each partition (by increasing the partition count)
    - Increase the number of JVMs across the cluster (increasing the total number of primary service threads)
    - Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed)
    - Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves)
    - Make sure that the backing map is as fast as possible (in most cases this means running on-heap)
    - Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded)
    As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

    Read the article

  • How to verify the code that could take a substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question, What is the best approach for coding in a slow compilation environment? To recap: I am stuck with a large software system with which a TDD ideology of "test often" does not work. And to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think that the best way out would be to add extensive logging to the system and to start "coding in large chunks", which I understand as: code for two to three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeball the code for 15 minutes, and only after all that do the compilation and run the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source-code verification / error-spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have much support from the compiler and unit-testing tools I've had until recently, so there should be a way to get back to the basics.

    Read the article

  • how do I uninstall old kernel options listed in Grub2? [closed]

    - by user12809
    Possible Duplicate: Is there a way to remove/hide old kernel versions? I installed Ubuntu Tweak in Ubuntu 11.10, went to Janitor, and selected and removed the old kernels that appeared there (3.0.0-12). Now the only installed linux-image that appears as 'Installed' in SPM is the most recent one (3.0.0-13), which is the one I want. It did not, however, eliminate the kernel listing in GRUB 2. At boot, the following options still appear:
    3.0.0-13-generic
    3.0.0-13-generic (recovery mode)
    3.0.0-12-generic (on /dev/sde5)
    3.0.0-12-generic (recovery mode) (on /dev/sde5)
    And in Terminal, when I change directory (cd) to /boot and then list (ls), I get the following kernels: 3.0.0-13, 2.6.38-12, 2.6.38-8. There is no change when I sudo update-grub in Terminal. 1) What is /dev/sde5, and where is it located in the file system, so I can delete it? 2) Why the differences between what appears as installed in SPM, what appears at boot in GRUB 2, and what shows when I list the contents of /boot in Terminal? Ultimately, I simply want to remove the 3.0.0-12 kernel options at boot in GRUB 2. What is the best and simplest way to do that? Thanks again.

    Read the article

  • Why do I lose certain values from the array?

    - by blankon91
    I have code to input values dynamically. When I use it to add the values for the first time it works fine, but when I want to edit them, the old values are not inserted by the SQL query while the new values are. Here is an example: http://i.stack.imgur.com/dwugS.jpg Here is the code:
    ============ the function =============
    sub ShowItemfgEdit(query, selItemName, defValue, num, cdisable)
        response.write "<select " & cdisable & " num=""" & num & """ id=""itemCombo"" name=""" & selItemName & """ class=""label"" onchange=""varUsage.ChangeSatuanDt(this)"">"
        if NOT query.BOF then
            query.moveFirst
            WHILE NOT query.EOF
                tulis = ""
                if trim(defValue) = trim(query("ckdbarang")) then
                    tulis = "selected"
                end if
                response.write "<option value=""" & trim(query("ckdbarang")) & """" & tulis & ">" & trim(query("ckdbarang")) & " - " & trim(query("vnamabarang"))
                query.moveNext
            WEND
        end if
        response.write "</select>"
    end sub
    ============ calling the function =============
    <td class="rb" align="left"><% call ShowItemfgEdit(qGetItemfgGrp,"fitem",qGetUsageDt("ckdfg"),countLine,readonlyfg) %></td>
    ============ post the value =============
    <input type="hidden" name="fitem" value="">
    ============ get the value =============
    for i = 1 to request.form("hdnOrderNum")
        if request.form("selOrdItem_" & i) <> "" then
            'bla...blaa...blaa...
            ckdfg = trim(request.form("fitem_" & i))   '<== here is the problem
            objCommand.commandText = "INSERT INTO IcTrPakaiDt " & _
                "(id, id_h, ckdunitkey, cnopakai, dtglpakai, ckdbarang, ckdgudang, nqty1, nqty2, csatuan1, csatuan2, nqtypakai, csatuanpakai, vketerangan, cJnsPakai, ckdprodkey, ckdfg, ncountstart, ncountstop, ncounttotal) " & _
                " VALUES " & _
                " (" & idDt & ",'" & idHd & "','" & selLoc & "','" & nopakai & "','" & cDate(request.form("hdnUsageDate")) & "','" & trim(ckdbarang) & "','" & trim(ckdgudang) & "'," & nqty1 & "," & nqty2 & ",'" & trim(csatuan1) & "','" & trim(csatuan2) & "'," & nqtypakai & ",'" & csatuanpakai & "','" & trim(keteranganItem) & "','" & trim(cjnspakai) & "','" & ckdprodkey & "','" & ckdfg & "'," & cnt1 & "," & cnt2 & "," & totalcnt & ")"
            set qInsertPakaiDt = objCommand.Execute
        end if
    next
    Problem: the old value of ckdfg is not inserted by the query, but the new value is. How do I fix this bug?

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business level requirements to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource intensive and traditional infrastructure management architectures have developed over time on a project by project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack, from the application, middleware, operating system, compute, network and storage. Only when this end to end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure datacenter. Managing Exalogic is substantially less complex and error prone than managing traditional systems built from individually sourced, multi-vendor components because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". Full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • Reverse-Engineer Driver for Backlit Keyboard

    - by user87847
    Here's my situation: I recently purchased a Sager NP9170 (same as the Clevo P170EM) and it has a multi-colored, backlit keyboard. Under Windows 7, you can launch an app that allows you to change the color of the backlighting to any of a handful of colors (blue, green, red, etc.). I want that same functionality under Linux. I haven't been able to find any software that does this, so I guess I'm going to have to write it myself. I'm a programmer by trade, but I haven't done much low-level programming, and I've certainly never written a device driver, so I was wondering if anyone could answer these two questions: 1) Is there any software already out there that does this sort of thing? I've looked fairly thoroughly but haven't found anything applicable. 2) Where would I start in trying to reverse engineer this sort of thing? Any useful articles, tutorials, books that might help? And just to clarify: the backlighting already works, that's not the problem. I just want to be able to change the color of the backlighting. This functionality is supported by the hardware. The laptop came with Windows software that does this, and I want the same functionality in Linux. I am willing to write this software myself, I just want to know the best way to go about it. Thanks!

    Read the article

  • JQueryMobile - Problems with dialog boxes [closed]

    - by Richard van Hees
    I'm programming in jQuery Mobile, but I can't seem to get some things working the way I want. First, it's good to mention that I am mostly using a multi-page template. I have a login function in the web-based app. The idea is that the user sees he's not logged in and can click a button to log in. A dialog box pops up, in which the user can enter his credentials. This dialog box is in front of the previous page, in my case just index.php. The page for the profile is at profile.php#profile. In this case the URL for the dialog box is index.php#profile&ui-state=dialog. Don't ask me why, that's how jQuery Mobile works, I guess. Anyway, after the user clicks 'Login' in the pop-up, I want a new dialog to pop up saying the user is logged in, and I want the content of the page behind it (index.php#profile) to refresh. Of course I want this all to move very smoothly with no refreshing of the whole page, to prevent loading time and thus a blank screen for a second. In short:
    - User is not logged in
    - Clicks on login
    - Dialog pops up with a form
    - Clicks login
    - New dialog pops up with 'success' (or whatever) in the same style as the previous dialog
    - Clicks OK
    - Page behind the dialogs has been refreshed without the user noticing
    Also, another thing that doesn't really work out for me: I can't seem to get a dialog to pop up triggered by an action in another dialog. It just appears as a normal page.

    Read the article

  • True column-mode (block-selection and editing) text editor solution?

    - by tamale
    In Windows, I used to use a text editor called Crimson Editor, which featured the best column-mode editing support I have ever used. When enabled via a simple Alt-C shortcut, selections could be made with the mouse or cursor keys and they would be visual blocks rather than wrapped lines. These selections could be deleted, moved, copied, and pasted, and all of the operations just made sense. You could also just start typing, and you'd get a column of the characters as you typed. There are multiple ways of getting parts of these features working separately, discussed on this forum thread, but no one has yet provided a solution that offers this all-encompassing and easy-to-use method. If someone could point me to a gedit plugin where this work is actively being pursued, perhaps I could help with the coding myself. If someone is aware of a text editor that already provides this full functionality, I'd appreciate the info. Running Crimson Editor through Wine and the close-but-not-quite multi-edit plugin for gedit are the temporary solutions I'm 'getting by with' for the time being.

    Read the article

  • Is full partition encryption the only sure way to make Ubuntu safe from external access?

    - by fred.bear
    (By "external access", I mean e.g. via a Live CD, or another OS on the same dual-boot machine.) A friend wants to try Ubuntu. He's fed up with Vista grinding to a crawl (the kids? :), so he likes the "potential" security offered by Ubuntu. The computer will be multi-booting Ubuntu (primary) and two Vistas (one for him, if he ever needs it again, and the other for the kids to screw up again), so he is concerned about any non-Ubuntu access to the Ubuntu partitions (and also to his Vista partition). I believe TrueCrypt will do the job for his Vista, but I'd like to know what the best encryption system for Ubuntu is. If TrueCrypt works for Ubuntu, it may be the best option for him, as it would be the same look and feel for both. Ubuntu will be installed with 3 partitions: 1) root, 2) home, 3) swap. Will Ubuntu's boot loader clash with TrueCrypt's encrypted partition? PS: Is encryption a suitable solution?

    Read the article

  • Partner Webcast - Oracle WebCenter: Portal Highlights - 31 Oct 2013

    - by Roxana Babiciu
    Oracle WebCenter is the center of engagement for business. In order to succeed in today's economy, organizations need to engage with information across all channels to ensure customers, partners and employees have access to the right information in the context of the business process in which they are engaged. The latest release of Oracle WebCenter addresses this challenge with updates across its complete portfolio. Nowadays, portals are multi-channel applications that enable the creation, sharing and distribution of personalized content, as well as access to social networking and self-service capabilities. Web 2.0 and social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. The new release of Oracle WebCenter Portal makes it easier and faster for business users to create intuitive portals with integrated application content: streamlining development with an integrated set of tools for web and mobile, providing out-of-the-box templates for common use cases, and expediting the portal creation experience. The new development tools empower business users to build and deploy mobile portals and websites with unprecedented speed, without having to wait for IT, which leads to a shorter time to market and reduced costs. Join us to discover a Web platform that allows organizations to quickly and easily create intranets, extranets, composite applications, and self-service portals, providing users a more secure and efficient way of consuming information and interacting with applications, processes, and other users – the latest Oracle WebCenter Portal release 11gR1 PS7. Read more here

    Read the article

  • Simple (and fast) dice physics

    - by Markus von Broady
    I'm programming a throw of 5 dice in ActionScript 3 + AwayPhysics (a Bullet physics port). I had a lot of fun tweaking frictions, masses, etc., and in the end I found the best results with more physics ticks per frame. Currently I use 10 ticks per frame (1/60 s) and it's OK, though I see a further improvement with 20 ticks. Even though it's only 5 cubes (dice) in a box (really a floor with 3 walls), I can't simulate 20 ticks per frame and keep FPS at 60 on a medium-aged PC. That's why I decided to precompute the frames of the animation, finishing it in around 1700 ticks in 2 seconds. The Flash player is frozen for these 2 seconds, and I'm afraid the result will be more like 5 seconds or even worse once I simulate multi-threading and compute frames in the background of other heavy processes and CPU drawing (the dice are only a part of this game). Because I want both players to see the dice roll in the same way, I can't compute the physics whenever resources are free and build up a buffer of at least one throw of each type (where type is the number of dice thrown). I'm afraid players will see a "preparing dices........." message too often and for too long. I think the only solution to this problem is replacing the physics engine with something simpler, or writing my own physics engine. Do you have any formulas for cube-cube and cube-wall collision detection, and for calculating how their angular and linear velocities should change after a collision occurs?
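    For the cube-wall case specifically, one common approach (not necessarily what AwayPhysics does internally) is to transform the cube's eight corners into world space and test them against the wall plane; the deepest corner below the plane gives the contact and its penetration depth. Below is a minimal, hedged Java sketch of that idea for a floor plane at y = 0; the class, method and parameter names are illustrative only, and an ActionScript 3 version would be analogous.

    // Minimal sketch: how far a rotated cube (die) penetrates the floor plane y = 0.
    // rot is a row-major 3x3 rotation matrix, center is the cube's world position.
    public class CubeFloorContact {

        /** Returns the penetration depth below y = 0, or a negative value if there is no contact. */
        static double deepestPenetration(double[] center, double[][] rot, double halfSize) {
            double deepest = Double.NEGATIVE_INFINITY;
            // Enumerate the 8 corners as (+-h, +-h, +-h) in the cube's local frame.
            for (int sx = -1; sx <= 1; sx += 2)
                for (int sy = -1; sy <= 1; sy += 2)
                    for (int sz = -1; sz <= 1; sz += 2) {
                        double lx = sx * halfSize, ly = sy * halfSize, lz = sz * halfSize;
                        // World-space y of the corner: center.y plus (second row of rot) dot local corner.
                        double wy = center[1] + rot[1][0] * lx + rot[1][1] * ly + rot[1][2] * lz;
                        deepest = Math.max(deepest, -wy); // penetration = distance below y = 0
                    }
            return deepest;
        }

        public static void main(String[] args) {
            double[][] identity = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
            // A die with half-size 0.5 whose centre sits 0.4 above the floor dips 0.1 below it.
            System.out.println(deepestPenetration(new double[]{0, 0.4, 0}, identity, 0.5)); // ~0.1
        }
    }

    Cube-cube contact is harder; the standard technique is a separating-axis test over the face normals and edge cross products, which is essentially the work the full physics engine is already doing, so a custom engine may not end up much cheaper for that case.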

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with implementing a way to add support for versioning of hardware packet specifications to one of our libraries. First, a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons and a 2- or 4-digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code and DD is the device ID. These packet specs differ from one command to the next, obviously, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet, like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons. Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we would create the args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN. Some of our other commands' specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears all of the modules behind a specific controller device of their text. We may overload this packet to have option fields with multiple meanings, like {SOH}AAOC{ETX}, where OC is literal text which tells the controller to only clear text on a specific module type and to leave the others alone, or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off of a specific device. Currently, in the method which generates the packet string, we evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of:
    if m_DeviceID != null then use {SOH}AADD{ETX}
    else if m_ClearOCs == true then use {SOH}AAOC{ETX}
    else use {SOH}AA{ETX}
    I had considered using XML, or a database, to store String.format format strings, which were linked to firmware version numbers in some table. We would load them up at startup and pass in the version number of the hardware's firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtime, to evaluate the args class's state and from that select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex. So this is why I am here.
    I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off was good enough I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices, design methodologies, APIs or other resources which I could use to accomplish the following (one possible shape of a solution is sketched below):
    - Logic to determine which set of commands to use for a given firmware version
    - Of those commands, which version of each command to use (based on the args class's state)
    - Keep the rules logic decoupled from the application so as to avoid needing releases for every firmware version
    - Be simple enough so I don't need weeks of study and trial and error to implement effectively.
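    As one possible direction for the first two points (a hedged sketch, not a definitive design), the version-to-format mapping and the state-dependent selection can both be expressed as small rule objects looked up at runtime. All names below are invented for illustration; only DisplayTextArgs echoes the class mentioned above, and it is stubbed here just so the sketch is self-contained.

    import java.util.List;
    import java.util.Map;

    // Stub of the existing args class, reduced to what the sketch needs.
    class DisplayTextArgs {
        String deviceId;   // null when the command targets all devices
        String text;
    }

    // One rule = one packet spec variant: a runtime-state predicate plus a formatter.
    interface PacketRule {
        boolean applies(DisplayTextArgs args);   // e.g. args.deviceId != null
        String format(DisplayTextArgs args);     // builds "{SOH}AADD...{ETX}"
    }

    class PacketFormatter {
        // Firmware version -> ordered rules; the first matching rule wins.
        private final Map<Integer, List<PacketRule>> rulesByVersion;

        PacketFormatter(Map<Integer, List<PacketRule>> rulesByVersion) {
            this.rulesByVersion = rulesByVersion;
        }

        String toPacket(int firmwareVersion, DisplayTextArgs args) {
            for (PacketRule rule : rulesByVersion.getOrDefault(firmwareVersion, List.of())) {
                if (rule.applies(args)) {
                    return rule.format(args);
                }
            }
            throw new IllegalArgumentException("No packet rule for firmware version " + firmwareVersion);
        }
    }

    The rule lists can be built per version in plain code (or loaded from a small config file for the parts that are pure format strings); keeping the applies() predicates as code sidesteps a full rule engine while still isolating the version differences in one place, which is roughly the decoupling asked for in the third point.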

    Read the article

  • A Myriad of Options

    - by Mark Hesse
    I am currently working with a customer that is close to outgrowing their Exadata X2-2 half rack in both compute and storage capacity.  The platform is used for one of their larger data warehouse applications and the move to Exadata almost two years ago has been a resounding success, forcing them to grow the platform sooner than anticipated. At a recent planning meeting, we started looking at the options for expansion and have developed five alternatives, all of which meet or exceed their growth requirements, yet have different pros and cons in terms of the impact to their production and test environments. The options include an in-rack upgrade to a full rack of Exadata using the recently released X3-2 platform (an option that even applies to an older V2 rack), multi-rack cabling the existing X2-2 to another full rack or half rack X2-2 (and utilizing both compute and storage capacity in the other rack), or simply adding a new X3-2 half rack (and taking advantage of the added compute and flash performance in the X3-2). While the decision is yet to be made, it had me thinking that one of the benefits of Exadata over a traditional database deployment is that when the time comes to expand the platform, there are a myriad of options.

    Read the article

  • How can I better manage far-reaching changes in my code?

    - by neuviemeporte
    In my work (writing scientific software in C++), I often get asked by the people who use the software to get their work done to add some functionality or change the way things are done and organized right now. Most of the time this is just a matter of adding a new class or a function and applying some glue to do the job, but from time to time, a seemingly simple change turns out to have far-reaching consequences that require me to redesign a substantial amount of existing code, which takes a lot of time and effort, and is difficult to evaluate in terms of time required. I don't think it has as much to do with inter-dependence of modules, as with changing requirements (admittedly, on a smaller scale). To provide an example, I was thinking about the recently-added multi-user functionality in Android. I don't know whether they planned to introduce it from the very beginning, but assuming they didn't, it seems hard to predict all the areas that will be affected by the change (apps preferences, themes, need to store account info somehow, etc...?), even though the concept seems simple enough, and the code is well-organized. How do you deal with such situations? Do you just jump in to code and then sort out the cruft later like I do? Or do you do a detailed analysis beforehand of what will be affected, what needs to be updated and how, and what has to be rewritten? If so, what tools (if any) and approaches do you use?

    Read the article

  • 2 year cis degree and in school for computer science what can I do?

    - by chame1eon
    Hi, I am 29 and have a recent 2-year CIS degree from a community college, an A+ certification, and meager experience with web stuff (Java, JavaScript, PHP) from my 1-year help desk internship. In all the programming classes I was able to blow through the homework easily, even while other students were panicking and dropping. I think I have managed to avoid the most atrocious noob/self-taught mistakes (spaghetti code, etc.) by just doing research before starting something and trying to keep good design in mind. Even so, I'd have to make heavy use of references to crawl through even simple projects that would result in fully finished, useful applications. I need a job now, I am tired of the slow pace of the classes, and I would love to get any kind of practical experience I could. The problem is that I am not sure what I should be trying to do. I have a very strong preference for application programming, or at least anything light on design and preferably pretty low level. If I can't do that, then anything technology related (for example, help desk) would be better than nothing. I live near Raleigh, NC. Am I qualified for anything that could contribute to coding (C++ or Java) experience, or even web development, though I don't really like it? Would web development experience help? If not, is there anything I could read or do that could help? Is the help desk my only choice? If it is, are there any relatively quick certifications or anything similar that would help while I am waiting? Sorry about the long multi-part question. Thanks for reading.

    Read the article

  • Rectangular Raycasting?

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
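    For the rectangle-vs-rectangle part at least, the overlapping region of two axis-aligned rectangles can be computed directly rather than raycast or sampled per pixel. Here is a minimal, hedged sketch of that calculation; it is written in Java purely for illustration (the SDL2/C++ version is a line-for-line translation), and the Rect type is invented for the example.

    // Minimal sketch: the overlap of two axis-aligned rectangles, or null if they do not intersect.
    final class Rect {
        final double x, y, w, h;
        Rect(double x, double y, double w, double h) { this.x = x; this.y = y; this.w = w; this.h = h; }
    }

    final class Overlap {
        static Rect intersection(Rect a, Rect b) {
            double left   = Math.max(a.x, b.x);
            double top    = Math.max(a.y, b.y);
            double right  = Math.min(a.x + a.w, b.x + b.w);
            double bottom = Math.min(a.y + a.h, b.y + b.h);
            if (right <= left || bottom <= top) return null;   // no overlapping area
            return new Rect(left, top, right - left, bottom - top);
        }

        public static void main(String[] args) {
            Rect r = intersection(new Rect(0, 0, 100, 100), new Rect(60, 40, 100, 100));
            System.out.println(r.x + "," + r.y + "  " + r.w + "x" + r.h); // 60.0,40.0  40.0x60.0
        }
    }

    The rectangle-vs-circle case is commonly handled by clamping the circle's centre to the rectangle to find the closest point; if that point lies within the radius the shapes intersect, and the chord where the circle crosses the rectangle edge (the secant line mentioned above) falls out of the same geometry, so no per-pixel checks are needed there either.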

    Read the article

  • 11gR2 RAC ASM

    - by Liu Maclean
    In 11gR2 RAC the OCR and voting disk can be stored inside ASM, which differs from the usual 10g practice of keeping these two kinds of cluster metadata on raw devices or a cluster file system. Since 11gR2 the ASM spfile itself can also be stored inside an ASM diskgroup, which raises a chicken-and-egg question about how the ASM instance starts: an instance normally needs its parameter file before it can mount a diskgroup, yet here the parameter file lives inside a diskgroup. A reader asked this on T.askmaclean.com: "hello maclean, my spfile is shown by ASMCMD> spget as +CRSDG/rac/asmparameterfile/registry.253.787925627. ASM is itself an ORACLE instance; before it has mounted any diskgroup, how can it read an spfile that is stored inside a diskgroup? thanks!" Here is the explanation.
    In 11.2 the Oracle Clusterware voting disk files are located differently from 11.1 and 10.2: in 11.2 both the OCR and the voting disk can be stored in ASM, and the voting disk files are found through the CSS voting file discovery string recorded in the GPnP profile. When that CSS discovery string points to ASM, the ASM discovery string is used to find the disks. In this environment udev is used to present the LUNs intended for ASM, and the udev device names match /dev/rasm-disk*. Dump the GPnP profile with gpnptool get:
    [grid@maclean1 trace]$ gpnptool get
    Warning: some command line parameters were defaulted. Resulting command line: /g01/grid/app/11.2.0/grid/bin/gpnptool.bin get -o-
    <?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="9" ClusterUId="452185be9cd14ff4ffdc7688ec5439bf" ClusterName="maclean-cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="172.168.1.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"><InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>L1SLg10AqGEauCQ4ne9quucITZA=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>rTyZm9vfcQCMuian6isnAThUmsV4xPoK2fteMc1l0GIvRvHncMwLQzPM/QrXCGGTCEvgvXzUPEKzmdX2oy5vLcztN60UHr6AJtA2JYYodmrsFwEyVBQ1D6wH+HQiOe2SG9UzdQnNtWSbjD4jfZkeQWyMPfWdKm071Ek0Rfb4nxE=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
    Success.
    Note the following two entries:
    <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>  ==>  the CSS voting disk discovery string points to +asm
    <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>  ==>  the ASM disk DiscoveryString is "/dev/rasm*", and the SPFile attribute records the alias of the ASM parameter file
    In other words, the GPnP profile stores the ASM disk discovery string together with the alias of the ASM parameter file. The alias by itself cannot be resolved through the normal diskgroup directory, because no diskgroup is mounted at this point, so ASM must locate the actual file contents directly on the discovered disks.
    Next, let's see how the SPFILE +SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933 is physically stored in ASM:
    [grid@maclean1 wallets]$ sqlplus / as sysasm
    SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 17 05:45:35 2012
    Copyright (c) 1982, 2011, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Real Application Clusters and Automatic Storage Management options
    SQL> set linesize 140 pagesize 1400
    SQL> col "FILE NAME" format a40
    SQL> set head on
    SQL> select NAME "FILE NAME", AU_KFFXP "AU NUMBER", NUMBER_KFFXP "FILE NUMBER", DISK_KFFXP "DISK NUMBER"
         from x$kffxp, v$asm_alias
         where GROUP_KFFXP = GROUP_NUMBER and NUMBER_KFFXP = FILE_NUMBER
         and name in ('REGISTRY.253.788682933')
         order by DISK_KFFXP,AU_KFFXP;
    FILE NAME                                 AU NUMBER FILE NUMBER DISK NUMBER
    ---------------------------------------- ---------- ----------- -----------
    REGISTRY.253.788682933                           39         253           1
    REGISTRY.253.788682933                           35         253           3
    REGISTRY.253.788682933                           35         253           4
    SQL> col path for a50
    SQL> select disk_number,path from v$asm_disk where disk_number in (1,3,4) and GROUP_NUMBER=3;
    DISK_NUMBER PATH
    ----------- --------------------------------------------------
              3 /dev/rasm-diske
              4 /dev/rasm-diskf
              1 /dev/rasm-diskc
    This ASM SPFILE has three mirror copies (redundancy=high): AU 39 on disk 1 (/dev/rasm-diskc), AU 35 on disk 3 (/dev/rasm-diske) and AU 35 on disk 4 (/dev/rasm-diskf). Now use kfed to read the headers of these ASM disks:
    [grid@maclean1 wallets]$ kfed read /dev/rasm-diske|grep spfile
    kfdhdb.spfile: 35 ; 0x0f4: 0x00000023
    [grid@maclean1 wallets]$ kfed read /dev/rasm-diskc|grep spfile
    kfdhdb.spfile: 39 ; 0x0f4: 0x00000027
    [grid@maclean1 wallets]$ kfed read /dev/rasm-diskf|grep spfile
    kfdhdb.spfile: 35 ; 0x0f4: 0x00000023
    The kfdhdb.spfile field in each ASM disk header records the AU number where that disk's copy of the ASM SPFILE is located, matching the query above. At startup, ASM only needs to scan the disks matching the DiscoveryString from the GPnP profile, read kfdhdb.spfile from their headers, and fetch the parameter file contents directly from those AUs. That is how the ASM instance obtains its SPFILE and starts without mounting any diskgroup, which answers the question above.

    Read the article

  • networking without port forwarding

    - by Wallacoloo
    I'm trying to add networking functionality to my game. I want any user to be able to host the game, and anyone to be able to connect as a client. The client sends info to the host about their player's position, etc. When the host receives a message, it validates it and then broadcasts it to its other clients. I will primarily be dealing with UDP, but will also need TCP for chat & lobby stuff. The problem is that I can't seem to get a packet sent from the client to the host or the other way around without enabling port forwarding on my router. But I don't think this is necessary. I believe the reason I need port forwarding is because I want to send a packet from 1 computer on a LAN to another computer on a different LAN, but neither of them have a global ip address since they're in a LAN. So really, I can only send packets targeting the other network's router, which must forward it on to the machine I want to reach. So how can I do this without port forwarding? Somehow a web server can communicate with my computer, which doesn't have a global ip, without port forwarding. And I've played plenty of multi-player games that don't require me to enable port forwarding. So it must be possible. Btw, I'm using SDL_Net. I don't think this will change anything though.

    Read the article

  • Oracle ADF Mobile

    - by rituchhibber
    We are happy to announce that Oracle ADF Mobile is now available for our customers. Oracle ADF Mobile enables developers to build applications that install and run on both iOS and Android devices from one source code base. Development is done with JDeveloper and ADF and leverages Java and HTML5 technologies, while keeping the same visual and declarative approach ADF is known for. Please click here to read more about the Oracle ADF Mobile release and learn more on our OTN Page. Feature highlights:
    - Java - Oracle brings a Java VM embedded with each application, so you can develop all your business logic in the platform-neutral language you know and love! (Yes, even on iOS!)
    - JDBC - Since we give you Java, we also provide JDBC along with a SQLite driver and engine that also supports encryption out of the box.
    - Multi-Platform - Truly develop your application only once and deploy to multiple platforms. iOS and Android platforms are supported for both phone and tablet.
    - Flexible - You can decide how to implement the UI: use an existing server-based UI framework like JSF, use your own favorite HTML5 framework like jQuery, or use our declarative HTML5 component set provided with the framework.
    - Device Feature Access - You can access device features from either Java or JavaScript to invoke features like camera, GPS, email, SMS, contacts, etc.
    - Secure - ADF Mobile provides integrated security that works with your server back-end as well. Whether you're using remote URLs, local HTML or AMX, you can secure any/all of your features with a single consistent login page. Since we also give you SQLite encryption, we are assured that your data is safe.
    - Rapid - Using the same development techniques that ADF developers are already used to, you can quickly create mobile applications without ever learning another language! ADF Mobile XML, or AMX for short, provides all the normal input and layout controls you expect, and we also add charts/maps/gauges to provide a very comprehensive set of UI controls. You can also mix and match any of the three for ultimate flexibility!

    Read the article

  • How should I implement multiple threads in a game? [duplicate]

    - by xerwin
    This question already has an answer here: Multi-threaded games best practices. One thread for 'logic', one for rendering, or more? (6 answers) So I recently started learning Java, and having an interest in playing games as well as developing them, naturally I want to create a game in Java. I have experience with games in C# and C++, but all of them were simple single-threaded games. Now that I have learned how easy it is to create threads in Java, I want to take things to the next level. I started thinking about how I would actually implement threading in a game. I read a couple of articles that say the same thing, "Usually you have a thread for rendering, one for updating game logic, one for AI, ...", but I haven't found an example implementation (or didn't look hard enough). My idea for an implementation is something like this (an example for the AI):
    public class AIThread implements Runnable {
        private List<AI> ai;
        private Player player;
        /* ... */
        public void run() {
            for (int i = 0; i < ai.size(); i++) {
                ai.get(i).update(player);
            }
            Thread.sleep(/* sleep until the next game "tick" */);
        }
    }
    I think this could work. If I also had a rendering thread and an updating thread, I would keep a list of the AI in both of those threads, since I need to draw the AI and I need to calculate the logic between the player and the AI (but that could be moved to AIThread; this is just an example). Coming from C++ I'm used to doing things elegantly and efficiently, and this seems like neither of those. So what would be the correct way to handle this? Should I just keep multiple copies of the resources in each thread, or should I keep the resources in one spot, declared with the synchronized keyword? I'm afraid that could cause deadlocks, but I'm not yet qualified enough to know when code will produce a deadlock.
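    Not the only way to structure this, but a minimal sketch of one common middle ground between "copy everything per thread" and "synchronize every access": the logic/AI thread builds an immutable snapshot of what the renderer needs and publishes it through a volatile reference. The class and field names below are purely illustrative, not from any particular engine.

    import java.util.Collections;
    import java.util.List;

    // One shared, safely published snapshot instead of per-thread copies or fine-grained locks.
    final class FrameSnapshot {
        final List<double[]> aiPositions;   // immutable data the renderer needs for one frame
        FrameSnapshot(List<double[]> aiPositions) {
            this.aiPositions = Collections.unmodifiableList(aiPositions);
        }
    }

    class SharedState {
        // volatile guarantees the renderer always sees a fully constructed snapshot
        private volatile FrameSnapshot latest = new FrameSnapshot(List.of());

        void publish(FrameSnapshot s) { latest = s; }    // called by the logic/AI thread once per tick
        FrameSnapshot read()          { return latest; } // called by the render thread every frame
    }

    The logic thread mutates its own working data, then publishes a fresh snapshot at the end of each tick; the render thread only ever reads the latest reference, so there is no lock to contend for and no way to deadlock between those two threads.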

    Read the article

  • ASP.NET TextBox TextChanged event not firing in custom EditorPart

    - by Ben Collins
    This is a classic sort of question, I suppose, but it seems that most people are interested in having the textbox cause a postback. I'm not. I just want the event to fire when a postback occurs. I have created a webpart with a custom editorpart. The editorpart renders with a textbox and a button. Clicking the button causes a dialog to open. When the dialog is closed, it sets the value of the textbox via javascript and then does __doPostBack using the ClientID of the editorpart. The postback happens, but the TextChanged event never fires, and I'm not sure if it's a problem with the way __doPostBack is invoked, or if it's because of the way I'm setting up the event handler, or something else. Here's what I think is the relevant portion of the code from the editorpart: protected override void CreateChildControls() { _txtListUrl = new TextBox(); _txtListUrl.ID = "targetSPList"; _txtListUrl.Style.Add(HtmlTextWriterStyle.Width, "60%"); _txtListUrl.ToolTip = "Select List"; _txtListUrl.CssClass = "ms-input"; _txtListUrl.Attributes.Add("readOnly", "true"); _txtListUrl.Attributes.Add("onChange", "__doPostBack('" + this.ClientID + "', '');"); _txtListUrl.Text = this.ListString; _btnListPicker = new HtmlInputButton(); _btnListPicker.Style.Add(HtmlTextWriterStyle.Width, "60%"); _btnListPicker.Attributes.Add("Title", "Select List"); _btnListPicker.ID = "browseListsSmtButton"; _btnListPicker.Attributes.Add("onClick", "mso_launchListSmtPicker()"); _btnListPicker.Value = "Select List"; this.AddConfigurationOption("News List", "Choose the list that serves as the data source.", new Control[] { _txtListUrl, _btnListPicker }); if (this.ShowViewSelection) { _txtListUrl.TextChanged += new EventHandler(_txtListUrl_TextChanged); _ddlViews = new DropDownList(); _ddlViews.ID = "_ddlViews"; this.AddConfigurationOption("View", _ddlViews); } } protected override void OnPreRender(EventArgs e) { ScriptLink.Register(this.Page, "PickerTreeDialog.js", true); string lastSelectedListId = string.Empty; if (!this.WebId.Equals(Guid.Empty) && !this.ListId.Equals(Guid.Empty)) { lastSelectedListId = SPHttpUtility.EcmaScriptStringLiteralEncode( string.Format("SPList:{0}?SPWeb:{1}:", this.ListId.ToString(), this.WebId.ToString())); } string script = "\r\n var lastSelectedListSmtPickerId = '" + lastSelectedListId + "';" + "\r\n function mso_launchListSmtPicker(){" + "\r\n if (!document.getElementById) return;" + "\r\n" + "\r\n var listTextBox = document.getElementById('" + SPHttpUtility.EcmaScriptStringLiteralEncode(_txtListUrl.ClientID) + "');" + "\r\n if (listTextBox == null) return;" + "\r\n" + "\r\n var serverUrl = '" + SPHttpUtility.EcmaScriptStringLiteralEncode(SPContext.Current.Web.ServerRelativeUrl) + "';" + "\r\n" + "\r\n var callback = function(results) {" + "\r\n if (results == null || results[1] == null || results[2] == null) return;" + "\r\n" + "\r\n lastSelectedListSmtPickerId = results[0];" + "\r\n var listUrl = '';" + "\r\n if (listUrl.substring(listUrl.length-1) != '/') listUrl = listUrl + '/';" + "\r\n if (results[1].charAt(0) == '/') results[1] = results[1].substring(1);" + "\r\n listUrl = listUrl + results[1];" + "\r\n if (listUrl.substring(listUrl.length-1) != '/') listUrl = listUrl + '/';" + "\r\n if (results[2].charAt(0) == '/') results[2] = results[2].substring(1);" + "\r\n listUrl = listUrl + results[2];" + "\r\n listTextBox.value = listUrl;" + "\r\n __doPostBack('" + this.ClientID + "','');" + "\r\n }" + "\r\n LaunchPickerTreeDialog('CbqPickerSelectListTitle','CbqPickerSelectListText','websLists','', 
serverUrl, lastSelectedListSmtPickerId,'','','/_layouts/images/smt_icon.gif','', callback);" + "\r\n }"; this.Page.ClientScript.RegisterClientScriptBlock(typeof(ListPickerEditorPart), "mso_launchListSmtPicker", script, true); if ((!string.IsNullOrEmpty(_txtListUrl.Text) && _ddlViews.Items.Count == 0) || _listSelectionChanged) { _ddlViews.Items.Clear(); if (!string.IsNullOrEmpty(_txtListUrl.Text)) { using (SPWeb web = SPContext.Current.Site.OpenWeb(this.WebId)) { foreach (SPView view in web.Lists[this.ListId].Views) { _ddlViews.Items.Add(new ListItem(view.Title, view.ID.ToString())); } } _ddlViews.Enabled = _ddlViews.Items.Count > 0; } else { _ddlViews.Enabled = false; } } base.OnPreRender(e); } void _txtListUrl_TextChanged(object sender, EventArgs e) { this.SetPropertiesFromChosenListString(_txtListUrl.Text); _listSelectionChanged = true; } Any ideas? Update: I forgot to mention these methods, which are called above: protected virtual void AddConfigurationOption(string title, Control inputControl) { this.AddConfigurationOption(title, null, inputControl); } protected virtual void AddConfigurationOption(string title, string description, Control inputControl) { this.AddConfigurationOption(title, description, new List<Control>(new Control[] { inputControl })); } protected virtual void AddConfigurationOption(string title, string description, IEnumerable<Control> inputControls) { HtmlGenericControl divSectionHead = new HtmlGenericControl("div"); divSectionHead.Attributes.Add("class", "UserSectionHead"); this.Controls.Add(divSectionHead); HtmlGenericControl labTitle = new HtmlGenericControl("label"); labTitle.InnerHtml = HttpUtility.HtmlEncode(title); divSectionHead.Controls.Add(labTitle); HtmlGenericControl divUserSectionBody = new HtmlGenericControl("div"); divUserSectionBody.Attributes.Add("class", "UserSectionBody"); this.Controls.Add(divUserSectionBody); HtmlGenericControl divUserControlGroup = new HtmlGenericControl("div"); divUserControlGroup.Attributes.Add("class", "UserControlGroup"); divUserSectionBody.Controls.Add(divUserControlGroup); if (!string.IsNullOrEmpty(description)) { HtmlGenericControl spnDescription = new HtmlGenericControl("div"); spnDescription.InnerHtml = HttpUtility.HtmlEncode(description); divUserControlGroup.Controls.Add(spnDescription); } foreach (Control inputControl in inputControls) { divUserControlGroup.Controls.Add(inputControl); } this.Controls.Add(divUserControlGroup); HtmlGenericControl divUserDottedLine = new HtmlGenericControl("div"); divUserDottedLine.Attributes.Add("class", "UserDottedLine"); divUserDottedLine.Style.Add(HtmlTextWriterStyle.Width, "100%"); this.Controls.Add(divUserDottedLine); }

    Read the article

  • Configure Forms based authentication in SharePoint 2010

    - by sreejukg
    Configuring form authentication is a straightforward task in SharePoint. Most public-facing websites built on SharePoint require forms-based authentication. Recently, one of the WCM implementations where I was part of the project team required a registration system. Any internet user can register on the site, and the site offers them some membership-specific functionality once they are logged in. Since registration is open to all, I don't want to store all those users in Active Directory, so I decided to use forms-based authentication for those users. This is a typical scenario for form authentication in a SharePoint implementation. To implement form authentication you require the following: a data store where the users are stored (technically this can be Active Directory, a SQL Server database, LDAP, etc.), and a login page; form authentication will redirect the user to the login page if the request is not authenticated, and on the login page there will be controls that validate the user's input against the configured data store. In this article, I am going to use a SQL Server database with the ASP.NET membership APIs to configure forms-based authentication in SharePoint 2010. This article assumes that you have a SQL membership database available. I have already configured the membership and roles database using the aspnet_regsql command. If you want to know how to configure the membership database using aspnet_regsql, read the blog post below. http://weblogs.asp.net/sreejukg/archive/2011/06/16/usage-of-aspnet-regsql-exe-in-asp-net-4.aspx The snapshot of the database after implementing membership and the role manager is as follows. I have used the database name "aspnetdb_claim". Make sure you have created the database and that it contains the tables and stored procedures for membership. Create a web application with claims-based authentication. This article assumes you have already created a web application using claims-based authentication; if you want to enable forms-based authentication in SharePoint 2010, you must enable claims-based authentication. Read this post for creating a web application using claims-based authentication. http://weblogs.asp.net/sreejukg/archive/2011/06/15/create-a-web-application-in-sharepoint-2010-using-claims-based-authentication.aspx  Make sure you have selected enable forms-based authentication, and then entered the membership provider and role manager names. To verify you are done with the configuration, navigate to the Central Administration website, go to the Web Applications page, select the web application and click on the icon; you will see the authentication providers for the current web application. Go to the Claims Authentication Types section and make sure forms-based authentication is enabled. As shown in the snapshot, I have named the membership provider SPFormAuthMembership and the role manager SPFormAuthRoleManager. You can choose your own names as you need. Modify the configuration files (web.config) to enable form authentication. There are three applications that need to be configured to support form authentication: 1. Central Administration: if you want to assign permissions to the web application using credentials from form authentication, you need to update the Central Administration configuration. If you do not want to use form authentication credentials from Central Administration, just skip this step.
    2. The STS service application: the Security Token Service is the service application that issues security tokens when users log in. You need to modify the configuration of the STS application to make sure users are able to log in. To find the STS application, follow these steps: go to IIS Manager and expand the Sites node; you will see SharePoint Web Services. Expand SharePoint Web Services and you will see SecurityTokenServiceApplication. Right-click SecurityTokenServiceApplication and click Explore; it will open the corresponding file system location. By default, the path for STS is C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\SecurityToken You need to modify the configuration file available in that location. 3. The web application that needs form authentication enabled: you need to modify the configuration of your web application to make sure it identifies users from form authentication.   Based on the above, I am going to modify the web configuration. At the end of each step, I have mentioned the expected output. I recommend going step by step and, after each step, making sure the configuration changes are working as expected. If you do everything all together and test your application at the end, you may face difficulties in troubleshooting configuration errors. Modifications for the Central Administration web.config: Open the web.config for Central Administration in a text editor. I always prefer Visual Studio for editing web.config. In most cases, the path of the web.config for the Central Administration website is as follows: C:\inetpub\wwwroot\wss\VirtualDirectories\<port number> Make sure you keep a backup copy of the web.config before editing it. Let me summarize what we are going to do with the Central Administration web.config. First I am going to add a connection string that points to the form authentication database that I created as mentioned in the previous steps. Then I need to add a membership provider and a role manager with the corresponding connection string. Then I need to update the PeoplePickerWildcards section to make sure the users appear in search results. By default there is no connection string available in the web.config of Central Administration. Add a connection string just after the configSections element. Below is the connection string I have used throughout the article. <add name="FormAuthConnString" connectionString="Initial Catalog=yourdatabasename;data source=databaseservername;Integrated Security=SSPI;" /> Once you have added the connection string, the web.config will look similar to the snapshot. Now add the membership provider. In the web.config for Central Administration there will be a <membership> tag; search for it. You will find membership and role manager under the <system.web> element. Under the membership providers section add the code below… <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding the membership element, see the snapshot of the web.config. Now you need to add the role manager element to the web.config. Inside the providers element under roleManager, add the code below.
<add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />

As a last step, update the people picker wildcard element in the web.config so that the users from your membership provider are available for browsing in Central Administration. Search for PeoplePickerWildcards in the web.config and add the following inside the <PeoplePickerWildcards> tag.

<add key="SPFormAuthMembership" value="%" />

After completing these steps, you can browse the users stored in the SQL Server database from the Central Administration website. Go to the site collection administrators page in Central Administration and select the site collection you created for form authentication. Click the people picker icon, choose Forms Auth, and click the search icon; you will see the users listed from the SQL Server database. Make sure the users are available for browsing from Central Administration before moving on. If you cannot find the users, there is an error in the configuration; check the Windows event logs to find the related errors and fix them.

Change the web.config for the STS application

Open the web.config for the STS application in a text editor. By default, the STS web.config has no system.web or connectionStrings section. Just after the system.webServer element, add the following code.

<connectionStrings>
  <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=sp2010_db;Integrated Security=SSPI;" />
</connectionStrings>
<system.web>
  <roleManager enabled="true" cacheRolesInCookie="false" cookieName=".ASPXROLES" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="All" createPersistentCookie="false" maxCachedResults="25">
    <providers>
      <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />
    </providers>
  </roleManager>
  <membership userIsOnlineTimeWindow="15" hashAlgorithmType="">
    <providers>
      <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />
    </providers>
  </membership>
</system.web>

After adding this, you should be able to log in using credentials from SQL Server. Try assigning a user as the primary or secondary administrator for your site collection from Central Administration, then log in to your site using form authentication. If everything is correct, you will be able to log in, which means the STS configuration is complete.
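If you would rather script this verification step, the site collection administrators can also be set with PowerShell. This is only a sketch: the site URL and user name are placeholders, and the claims-encoded alias assumes the membership provider is named SPFormAuthMembership as in this article (in SharePoint 2010, forms-authenticated users are encoded as i:0#.f|<provider>|<username>).

# Sketch: make a forms-authenticated user the secondary site collection administrator.
# The URL, the user "john" and the provider name in the alias are assumptions for illustration.
Set-SPSite -Identity "http://portal.example.com" `
    -SecondaryOwnerAlias "i:0#.f|spformauthmembership|john"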
Configuration of the web application for form authentication

As a last step, you need to modify the web.config of the form authentication web application. Once this is done, you will be able to grant permissions to users stored in the membership database. Open the web.config of the web application you created for form authentication; you can find it under the path

C:\inetpub\wwwroot\wss\VirtualDirectories\<port number>

Essentially you need to add the connection string, the membership provider and the role manager, and update the people picker wildcard configuration, just as you did for Central Administration.

Add the connection string (the same one you added to the Central Administration web.config).

Search for <membership> in the web.config; you will find it inside the system.web element, and there will already be other providers listed there. Add your form authentication membership provider (the same <add> element used in the Central Administration web.config) to the providers element under membership.

Search for the <roleManager> element in the web.config and add the new provider under its providers section.

Now configure the people picker wildcard setting. As mentioned earlier, this makes it possible to locate users by entering part of a username. Add the same <add key="SPFormAuthMembership" value="%" /> line under the <PeoplePickerWildcards> element.

Now you have completed all the setup for form authentication. Navigate to the web application and go to Site Actions -> Site Settings -> People and Groups. Click New -> Add Users; the people picker dialog will pop up. Click the browse icon, select Forms Auth, enter a username in the search textbox and click the search icon. If the dialog displays the user, you are done with the configuration. Any user you add to the form authentication database will be able to access the SharePoint portal as normal.
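As a final scripted check (or to grant access without going through the browser), a forms-authenticated user can be added to the site directly from PowerShell. Again, this is only a sketch: the web URL, user name and permission level are placeholders, and the claims-encoded alias assumes the provider name used throughout this article.

# Sketch: grant a forms-authenticated user Read access to the web application.
# The alias encoding and all names below are assumptions; adjust them for your environment.
New-SPUser -UserAlias "i:0#.f|spformauthmembership|john" `
    -Web "http://portal.example.com" `
    -DisplayName "John" `
    -PermissionLevel "Read"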
