Search Results

Search found 71953 results on 2879 pages for 'work environment'.

Page 448/2879 | < Previous Page | 444 445 446 447 448 449 450 451 452 453 454 455  | Next Page >

  • Macbook Pro 8,2 Graphics switching - Ubuntu 12.04

    - by fgs
    I've been reading docs and various pages for a few hours now and can't seem to put all of the pieces together on this. Basically I am trying to get 12.04 installed on my MBP 8,2 with graphics card switching working in some way or another. My basic understanding is that I need to do an EFI boot install of Ubuntu so that graphics card switching will work (due to the hardware design). From there I may be able to use one of the kernel modules for graphics switching: https://help.ubuntu.com/community/HybridGraphics That article isn't clear on whether I need to do an EFI install. I have also seen comments in posts here that say an EFI install works by default as long as you have rEFIt installed. Overall, I'm quite lost as to the simplest way to proceed to get an install up and running with graphics switching. I don't mind using open-source GFX drivers as long as the basics work. Any help towards a solution is greatly appreciated.
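    For reference, a minimal sketch of how the vga_switcheroo interface (the mechanism behind the HybridGraphics wiki page) is commonly driven with the open-source drivers, assuming the module is active and debugfs is mounted; untested on the MBP 8,2 specifically:

        # check which GPU is active (IGD = integrated, DIS = discrete)
        sudo cat /sys/kernel/debug/vgaswitcheroo/switch

        # power down whichever GPU is not currently driving the display
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch

        # queue a switch to the integrated GPU for the next X restart
        echo DIGD | sudo tee /sys/kernel/debug/vgaswitcheroo/switch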

    Read the article

  • How to enable true remote login

    - by Scán
    I don't quite know what these things are called, so a search did not produce any help. I've got two computers, a desktop and a netbook. The netbook is really weak, and there's hardly any fun doing work with it, especially after Ubuntu software swallows so much CPU power for nothing. My desktop, on the other hand, is good but uncomfortably positioned. I know you can use any Linux system as a server that accepts logins. I want to be able to log in and work on my desktop from my netbook. No VNC, no SSH - a full X server: I want to be able to choose "Login on Desktop" in my login menu on the netbook and have everything as if I was there. I hope I've made my point clear. Is it possible on a local network? And if so, how can I easily set it up?
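    For reference, the "choose the desktop from the netbook's login menu" setup is usually done with XDMCP. A minimal sketch for LightDM on 12.04 (assuming both machines are on the same trusted LAN; XDMCP traffic is unencrypted, and UDP port 177 must be reachable):

        # on the desktop: enable XDMCP in /etc/lightdm/lightdm.conf
        [XDMCPServer]
        enabled=true
        port=177

        # restart the display manager to pick up the change
        sudo restart lightdm

        # on the netbook: start a second X server that asks the desktop for a login screen
        X :1 -query desktop-hostname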

    Read the article

  • How To Get SSH Command-Line Access to Windows 7 Using Cygwin

    - by YatriTrivedi
    Are you comfortable with Linux/Unix and want SSH access to your Windows 7 machine? Cygwin provides this functionality and gives you a familiar environment to work with in a few simple steps. We're assuming you've got Cygwin installed and configured. If not, check out our article, How To Use Linux Commands in Windows with Cygwin, to get started.
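    For anyone skimming, the core of the setup boils down to a few commands run from a Cygwin terminal started as administrator (a rough sketch; prompts and defaults vary by Cygwin version):

        # generate host keys, create the service account and install the sshd service
        ssh-host-config -y

        # start the service (either form works, depending on how it was installed)
        net start sshd
        cygrunsrv -S sshd

        # allow TCP port 22 through the Windows firewall, then test from another machine
        ssh your-windows-user@windows-host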

    Read the article

  • How do I run a 64-bit guest in VirtualBox?

    - by ændrük
    I would like to have an Ubuntu 11.04 64-bit test environment. When I try booting the Ubuntu 11.04 64-bit installation CD in VirtualBox, the following message is displayed by VirtualBox: VT-x/AMD-V hardware acceleration has been enabled, but is not operational. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot. Please ensure that you have enabled VT-x/AMD-V properly in the BIOS of your host computer. What am I doing wrong? Details: VBox.log, ubuntu-test.vbox, and /proc/cpuinfo. Kernel: Linux aux 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux The Virtualization setting in the BIOS is set to Enabled.
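    For reference, a quick way to check both ends of this from the host (a sketch; the VM name below is taken from the attached ubuntu-test.vbox, adjust as needed):

        # a non-zero count means the host CPU advertises VT-x (vmx) or AMD-V (svm)
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # make sure the VM itself requests hardware virtualization and a 64-bit OS type
        VBoxManage modifyvm "ubuntu-test" --hwvirtex on --ostype Ubuntu_64
        VBoxManage showvminfo "ubuntu-test" | grep -i 'VT-x\|ostype'

    If the CPU flag is present but the message persists, the BIOS setting sometimes only takes effect after a full power-off rather than a warm reboot.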

    Read the article

  • Vdpau performance in Precise with Unity 3d

    - by bowser
    vdpau seems to be broken in Precise under Unity 3D. CPU usage ranges around 50-70% for 1080p movies, while the same movies use around 5-10% in Natty with vdpau enabled (under Unity 3D). The card is an Nvidia G105M. It doesn't seem to be an Nvidia driver problem, because in GNOME Shell everything works as expected, and I have tried different versions of the Nvidia drivers (295.20, 295.33, 295.40 and the latest 302.xx from xorg-edgers). The results are all the same: it works in GNOME Shell but not in Unity 3D. Disabling sync to VBlank works if the movie is not in full-screen mode, but it doesn't work for full screen. I have searched around and haven't found much info. I am wondering if others are experiencing the same problem and if there is some known workaround that I have missed. Unity 3D is otherwise very nice in Precise, but this is a show-stopping issue for me (literally). Thanks. I have filed a bug here: https://bugs.launchpad.net/unity/+bug/993397

    Read the article

  • Trouble launching byobu with custom windows.NAME file

    - by Brendan Piater
    Apologies if the solution is obvious, but I'm stumped! I want to launch byobu from within my GNOME 3 desktop environment on 12.04, but it's not working for me as yet. Here's what I have: I created a window-list file, ~/.byobu/windows.mrb, then tried to launch byobu from within the GNOME terminal with:

        $ BYOBU_WINDOWS=mrb byobu

    But it gives me the default window list, NOT my window list. I followed these instructions from the wiki: http://goo.gl/omgl8 Here is a sanitised version of my windows.mrb file:

        screen -t local bash
        screen -t name1 ssh xxx@xxx
        screen -t name2 ssh root@xxx
        screen -t name3 bash

    Appreciate the time and effort. Cheers, Brendan
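    One thing worth checking, sketched below (an assumption on my part: byobu around 12.04 defaults to the tmux backend, and screen-style windows.NAME files are only honoured by the screen backend):

        # see which backend byobu is using
        cat ~/.byobu/backend          # BYOBU_BACKEND=tmux or BYOBU_BACKEND=screen

        # either switch byobu to the screen backend so windows.NAME is honoured...
        byobu-select-backend screen
        BYOBU_WINDOWS=mrb byobu

        # ...or keep tmux and put an equivalent list in ~/.byobu/windows.tmux.mrb instead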

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files

    Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment. Below is some of the information and details regarding the form I created for the session. There are many blogs and Microsoft articles on how to create a basic form, so I won't repeat that information here. This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach. The above link contains the zipped package files of the two InfoPath forms (no-code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck. If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.). Also, you must have the SharePoint Enterprise version, with InfoPath Services configured, in order to use the web-browser-enabled forms.

    So what are the requirements for this template? Business Card Request Form Template Design Plan:

    - Gather user information and requirements for the card
    - Pull in as much user information as possible; use data from the user profile web services as a data source
    - Show and hide fields as necessary for requirements
    - Create multiple views - one for those submitting the form and another view for the executive assistants placing the orders
    - Browser-based form integrated into a SharePoint team site
    - Submitted directly to a form library

    The base form was created using the blank template. The table and rows were added using the Insert tab and selecting Custom Table. The use of tables is a great way to make sure everything lines up. You do have to split the tables from time to time. If you've ever split cells and then tried to re-align one to find that you impacted the others, you know why. Here is what the base form looks like in InfoPath.

    Show and hide fields as necessary for requirements

    You will notice I also used Sections within the form. These show or hide depending on options selected or whether or not fields are blank. This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn't apply). Although not used in this one, you can also use various views with a tab interface. I'll show that in another post.

    Gather user information and requirements for the card / Pull in as much user information as possible, using data from the user profile web services as a data source

    Utilizing rules, you can load data when the form initiates (Data tab, Form Load). Anything you can automate is always appreciated by the user, as that is data they don't have to enter - for example, loading their user id or other user information on load. Always keep in mind, though, how much data you load and the method for loading that data (through rules, code, etc.). They have an impact on form performance. The form will take longer to load if you bring in a ton of data from external sources. Laura Rogers has a great blog post on using the User Information List to load user information. If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit. What I have found is that using the User Profile service via code-behind or the web service "GetUserProfileByName" (as above) can take more time to load the user data. Just food for thought. You must add the data connection in order for the above rules to work. You can connect to the data connection through the Data tab, Data Connections, or select the Manage Data Connections link which appears under the main data source. The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.

    Create multiple views - one for those submitting the form and another view for the executive assistants placing the orders

    You can also create multiple views for the users to enhance their experience. Once they've entered the information and submitted their request for business cards, they don't really need to see the main data input screen any more. They just need to view what they entered. From the Page Design tab, select New View and give the view a name. To review the existing views, click the down arrow under View. The ReviewView shows just what the user needs and nothing more. Once you have everything configured, the form should be tested within a test SharePoint environment before final deployment to production. This validates that you don't have any rules or code that could impact the server negatively.

    Submitted directly to a form library

    You will need to know the form library that you will be submitting to when publishing the template. Configure the Submit data connection to connect to this library. There is already one configured in the sample, but it will need to be updated to your environment prior to publishing. The Design template is different from the Published template. While both have the .XSN extension, the published template contains all the "package" information for the form. The published form is what is loaded into Central Admin, not the design template.

    Browser-based form integrated into a SharePoint team site

    In Central Admin, under General Settings, select Manage Form Templates. Upload the published form template and activate it to a site collection. Now it is available as a content type to select in the form library. Some documentation on publishing form templates: Technet - Manage administrator approved form templates.

    And that's all our base requirements. Hope this helps to give a good start.

    Read the article

  • Establishing relationships with unsolicited recruiters

    - by Michael
    Several times each year, I receive unsolicited introductions from tech recruiters, usually via LinkedIn and usually local firms. I am not currently looking for a new job. Is it advisable to establish a relationship with one or more recruiters when I'm not interested in finding new work, so that they have my resume on file? Here's another way I approach the question: My plumber was first hired at the point of need: I had a plumbing problem, looked up a few of them, and evaluated and hired based on their demeanor and cost estimates. I established a relationship with a general attorney, on the other hand, well in advance of ever actually needing services so that if or when services are needed he already knows enough about me to begin work. Should I approach recruiters like I approached my plumber or my lawyer? A separate discussion, I suppose, is whether or not the type of recruiters who troll LinkedIn for clients are generally helpful or not. Edit: I have never worked with a recruiter before, and therefore have little idea what to expect.

    Read the article

  • why not use unmanaged (unsafe) code in c#

    - by user613326
    There is an option in C# to execute code unchecked. It's generally not advised to do so, as managed code is much safer and it overcomes a lot of problems. However, I am wondering: if you're sure your code won't cause errors, and you know how to handle memory, then why (if you like fast code) follow the general advice? I am wondering this since I wrote a program for a video camera, which required some extremely fast bitmap manipulation. I made some fast graphical algorithms myself, and they work excellently on the bitmaps using unmanaged code. Now I wonder in general: if you're sure you don't have memory leaks, or risks of crashes, why not use unmanaged code more often? PS, my background: I kind of rolled into this programming world and I work alone (I have done so for a few years), so I hope this software design question isn't that strange. I don't really have other people out there, like a teacher, to ask such things.
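    For context, the kind of code in question looks roughly like this: a sketch of unsafe bitmap access via LockBits, assuming a 32bpp ARGB bitmap (not the poster's actual code, just an illustration of the technique):

        using System.Drawing;
        using System.Drawing.Imaging;

        static class FastBitmapOps
        {
            // Invert the colour channels of a 32bpp ARGB bitmap in place.
            // Requires the project to be built with /unsafe ("Allow unsafe code").
            public static unsafe void Invert(Bitmap bmp)
            {
                var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
                BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
                try
                {
                    byte* row = (byte*)data.Scan0;
                    for (int y = 0; y < data.Height; y++, row += data.Stride)
                    {
                        byte* px = row;
                        for (int x = 0; x < data.Width; x++, px += 4)
                        {
                            px[0] = (byte)(255 - px[0]); // blue
                            px[1] = (byte)(255 - px[1]); // green
                            px[2] = (byte)(255 - px[2]); // red
                            // px[3] (alpha) left untouched
                        }
                    }
                }
                finally
                {
                    bmp.UnlockBits(data);
                }
            }
        }

    Compared with GetPixel/SetPixel, this avoids a method call and bounds/format checks per pixel, which is where most of the speed difference comes from.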

    Read the article

  • Fast pixelshader 2D raytracing

    - by heishe
    I'd like to do a simple 2D shadow calculation algorithm by rendering my environment into a texture, and then use raytracing to determine which pixels of the texture are not visible to the point light (simply handed to the shader as a vec2 position). A simple brute-force algorithm per pixel would look like this:

        line_segment = line segment between current pixel of texture and light source
        For each pixel in the texture:
        {
            if pixel is not just empty space && pixel is on line_segment
                output = black
            else
                output = normal color of the pixel
        }

    This is, of course, probably not the fastest way to do it. Question is: What are faster ways to do it, or what are some optimizations that can be applied to this technique?
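    To make the brute-force idea above concrete, here is what it might look like as a GLSL fragment shader. This is only a sketch: it marches a fixed number of steps from the current texel toward the light and treats any non-empty texel (alpha &gt; 0) along the way as an occluder; the uniform names and step count are arbitrary.

        uniform sampler2D scene;   // environment rendered into a texture
        uniform vec2 lightPos;     // light position in texture coordinates (0..1)
        varying vec2 texCoord;     // current pixel's texture coordinate

        const int STEPS = 64;

        void main()
        {
            vec4 color = texture2D(scene, texCoord);
            vec2 step = (lightPos - texCoord) / float(STEPS);

            float shadow = 0.0;
            vec2 p = texCoord;
            for (int i = 0; i < STEPS; ++i)
            {
                p += step;
                // alpha > 0 means "not empty space" in this sketch
                if (texture2D(scene, p).a > 0.0)
                {
                    shadow = 1.0;
                    break;
                }
            }

            gl_FragColor = (shadow > 0.5) ? vec4(0.0, 0.0, 0.0, 1.0) : color;
        }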

    Read the article

  • Firefox, Chrome, and Flash on Ubuntu

    - by Zimmer
    OK, I have recently run into some problems and was hoping you guys could help:

    1) On Chrome, sometimes when I play a video (even on YouTube) the audio won't work (yet other apps' audio will work), but after pressing the play button (pausing and unpausing the video) it finally works. However, if I pause the video and click play, it goes back to not working until I re-do that process.

    2) When I go to play videos in Firefox or go to Grooveshark, it says I don't have Flash; but I do, and when I go to install Flash it says I have the latest version for Linux. Flash works fine on Chrome (well, except the audio problem above, which annoys me to no end!)

    Read the article

  • Multiple volumetric lights

    - by notabene
    I recently read the GPU Gems 3 article Volumetric Light Scattering as a Post-Process. I like the idea of adding a volumetric light property to the realtime renderer I'm working on. The question is: will it work for multiple lights? Our renderer uses one render pass per light and uses additive blending to sum incoming light. I'm mostly convinced that it has to work nicely. Do you agree? Maybe there could be a problem where light rays cross each other.

    Read the article

  • How can I copy/paste files via RDP in Kubuntu?

    - by Dai
    I recently installed the latest Kubuntu (x64) on my work machine, as I am trying to migrate away from Windows. Unfortunately I use RDP very frequently to connect to customers' servers and need to be able to copy files across. I have tried the following packages with no luck: remmina, rdesktop, xfreerdp. My latest attempt to solve this involved connecting one of my folders to the remote server; here is the command I used to launch rdesktop:

        rdesktop -5 -K -r disk:home=/home/dai -r clipboard:CLIPBOARD -r sound:off -x l -P 192.168.0.2 -u "administrator" -p pass

    The servers are not all running the same version of Windows; the one I've been trying so far is running Server 2003 R2. Customer servers range from Server 2000 to Server 2008. I've been Googling this like mad, but all the solutions I find seem to fail, maybe because almost all the help out there assumes that I'm running Gnome. Sorry if this is a stupid question. Thanks in advance for your help. Edit: Copying and pasting text seems to work just fine, but that's not what I need.
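    In case it helps others with the same setup, with a reasonably recent xfreerdp the equivalent drive and clipboard redirection would look something like the sketch below (option syntax has changed between FreeRDP releases, so treat this as approximate; older 1.0-era builds use a different --plugin rdpdr style instead):

        # share /home/dai on the server as a drive named "home" and redirect the clipboard
        xfreerdp /v:192.168.0.2 /u:administrator /drive:home,/home/dai +clipboard

    The shared folder then appears on the Windows side as a redirected drive in Explorer, which is usually the easiest way to copy files across an RDP session.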

    Read the article

  • suspend resume not working on HP dv7

    - by Emiel
    This one is driving me nuts. My HP dv7 laptop isn't resuming from suspend or hibernate. On suspend-resume it leaves me with a black screen. On hibernate it successfully loads the image and then it hangs... I searched the internet and tried several things, but nothing seems to work for this HP dv7 on Ubuntu 12.04. With 11.10 it didn't work either.

        Intel® Core™ i5 CPU M 450 @ 2.40GHz × 4
        VESA: Intel® Ironlake Mobile Graphics
        64-bit

    Hope someone knows where to start the search... I'm really desperate.

    Read the article

  • Skype Video not working 13.04 & 13.10?

    - by MrA
    I am trying to fix an issue with video calls. I can check the video from the options menu and it works fine, but it does not work while calling someone - it shows "video disabled". How can I fix this issue? I am using Ubuntu 13.10 and Skype version 4.2.0.11. I checked many sites, including the links below, but could not find a solution:

        Skype no video in 13.04
        Skype Video Call Not Working Ubuntu 11.10
        No video skype found
        Video doesn't work well in skype
        http://www.youtube.com/watch?v=WeaLCzB4qy8
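    One workaround that is frequently suggested for this combination (32-bit Skype 4.x on 64-bit Ubuntu) is to preload the v4l compatibility shim when starting Skype. A sketch, assuming the 32-bit libv4l package is installed; the library path can differ between releases:

        # install the 32-bit video4linux compatibility libraries
        sudo apt-get install libv4l-0:i386

        # start Skype with the v4l1 compatibility layer preloaded
        LD_PRELOAD=/usr/lib/i386-linux-gnu/libv4l/v4l1compat.so skype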

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up a workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where dependencies are as follows:

    - core is standalone
    - graph depends on core (sub-module at ./libs/core)
    - network depends on core (sub-module at ./libs/core)
    - studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    - player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch, you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from this wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    - Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    - Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    - Introduce an extra dependency: studio on core.

    Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode (a CMake sketch of this idea follows below). The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
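    To sketch how the "standalone vs. dependency" idea above might look in CMake (file and target names are made up; this only illustrates letting the parent decide where core comes from, it is not a complete build):

        # libs/graph/CMakeLists.txt (sketch)
        cmake_minimum_required(VERSION 2.8)
        project(graph)

        option(GRAPH_STANDALONE "Build graph against its own core sub-module" ON)

        if(GRAPH_STANDALONE)
            # standalone build: use the nested sub-module and point at its headers
            add_subdirectory(libs/core)
            set(CORE_INCLUDE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/libs/core/include)
        endif()
        # in "dependency" mode the parent sets CORE_INCLUDE_DIR and builds core itself
        include_directories(${CORE_INCLUDE_DIR})
        add_library(graph src/graph.cpp)
        target_link_libraries(graph core)

        # studio/CMakeLists.txt (sketch)
        add_subdirectory(libs/core)                     # the single shared copy of core
        set(CORE_INCLUDE_DIR ${CMAKE_SOURCE_DIR}/libs/core/include)
        set(GRAPH_STANDALONE OFF CACHE BOOL "" FORCE)   # force "dependency" mode
        add_subdirectory(libs/graph)
        add_subdirectory(libs/network)

    Because the parent forces the cache entry before add_subdirectory, graph and network skip their nested core sub-modules entirely, so only one copy of core is built.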

    Read the article

  • How to port animation from one skeleton to another?

    - by shawn
    While I need to do this in a Blender3D modeler script, the math should be similar for other modelers or realtime engines. Blender3D-specific terminology:

    - Armature = skeleton
    - EditBone = rest pose bone (stores the rest pose matrix)
    - PoseBone = can store a different pose (animation matrix) for each frame of your animation

    I need to share animations (Blender Actions) between Armatures which have EditBones with the same names and the same positions, but which can have different (rest pose) angles and scales. Plus, the Armatures might have different bone hierarchies (bone parenting / no bone parenting).

    Why I need this: I've made an importer/exporter for a 3d format for a game. The format doesn't store enough info to connect/parent the bones, which makes posing/animating character models in a 3d modeller nearly impossible (original model files for the 3d modeler don't exist, this is for modding). As there are only 2 character skeleton types in the game, I decided to optionally generate the bones from hardcoded data in the model importer and undo that in the exporter. This makes it easy to pose the model for checking weights, easy to create weights, makes it easier for Blender to generate automatic weights, and of course makes animating possible. This worked perfectly: the importer optionally generated the Armature itself and the exporter removed those changes, so the exported model works with existing animations in the game. But now I'm writing an importer and exporter for the game's animation format, and here come the problems of:

    - trying to make original animations work in Blender with my "custom" (modified) Armature
    - trying to make animations created using the "custom" (modified) Armature work with the original models in the game (and Blender).

    Constraints or bone snapping inside Blender won't work, as they don't care that the bones have different angles in the rest pose; they will still face the same direction. It seems I just need to get the "difference" between the EditBone matrices of all EditBones for the two Armatures somehow and apply that difference to the PoseBone matrices of all PoseBones, for all frames of my animation. I need to know how to get that difference and how to apply it. BTW, PoseBone matrices are relative to the rest pose; by default they are the identity matrix:

        [1.000000, 0.000000, 0.000000, 0.000000]   (matrix row 0)
        [0.000000, 1.000000, 0.000000, 0.000000]   (matrix row 1)
        [0.000000, 0.000000, 1.000000, 0.000000]   (matrix row 2)
        [0.000000, 0.000000, 0.000000, 1.000000]   (matrix row 3)

    So the question is: how do I get the difference between two bone (EditBone) matrices and apply that difference to the animation matrices (PoseBone matrices)? Please be easy on the matrix math.
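    A sketch of the core matrix reasoning, under the assumptions stated above (PoseBone matrices relative to the rest pose, bones unparented, everything expressed in armature space); it uses Blender's mathutils, but method names and the multiplication operator vary slightly between Blender releases:

        # retarget_sketch.py - per bone, per frame
        from mathutils import Matrix

        def retarget_pose(rest_a, rest_b, pose_a):
            # rest_a, rest_b: 4x4 rest-pose (EditBone) matrices in armature space
            # pose_a:         4x4 animation (PoseBone) matrix, relative to rest_a
            # armature-space transform the animation produces on skeleton A
            world = rest_a * pose_a
            # the pose that reproduces the same armature-space transform on skeleton B
            return rest_b.inverted() * world

        # usage idea (names hypothetical):
        # pose_b = retarget_pose(edit_bone_a_matrix, edit_bone_b_matrix, pose_bone_a_matrix)

    In other words, the "difference" is rest_b.inverted() * rest_a, applied on the left of each PoseBone matrix; with bone parenting the same idea applies, but the matrices first have to be brought into a common (armature or parent) space.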

    Read the article

  • How can I host a website on a dynamically-assigned IP address?

    - by nick
    I recently upgraded my internet to the point that it is much faster and more reliable than my current webhost. I would like to move my current domain to be hosted at home, but my IP address is dynamic. As far as I know, I only get a new IP when I restart my modem and/or router (which is almost never) or when Cable One (my ISP) pushes out a firmware update (rarely). There are a few ways I can see doing this:

    1. Convince my ISP to give me a static IP.
    2. Assign my router my current IP to force a static IP (which might work?).
    3. Set my DNS record to my current IP address and update it on the rare occasions that it changes.

    Obviously I'm hoping that the first one works, but I don't want to pay a lot of extra money (if that's what it takes) to get a static IP address. Which of these options will work most reliably?
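    If option 3 ends up being the route taken, the update can be automated rather than done by hand. A sketch using ddclient with a dyndns2-compatible provider; the provider, credentials and hostname below are placeholders:

        # /etc/ddclient.conf (sketch)
        protocol=dyndns2
        use=web, web=checkip.dyndns.org     # discover the current public IP
        server=members.dyndns.org
        login=your-username
        password=your-password
        yourdomain.example.com

    ddclient then runs as a daemon (or from cron) and pushes the new IP to the DNS provider whenever it changes, so the rare modem or firmware-update IP change heals itself within minutes.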

    Read the article

  • Cloud INaaS from Data Integration companies

    - by llaszews
    Traditional integration IT vendors are also starting to offer INaaS. Informatica has been the most aggressive integration vendor when it comes to offering INaaS. Informatica has offered INaaS for over five years and continues to add capabilities, has a number of high-profile references, and also continues to add out-of-the-box cloud integration with major COTS and SaaS providers. The Informatica Marketplace contains pre-packaged Informatica Cloud end-points and plug-ins. One such Marketplace solution is integration with Oracle E-Business Suite using Informatica integration. The Informatica E-Business Suite INaaS offering includes automatic loading and extraction of data between Salesforce CRM and on-premise systems, cloud-to-cloud, flat files, and relational databases. The entire Informatica Cloud integration solution runs in an Informatica-managed facility (PaaS). When running in a PaaS environment, Informatica offers an option to keep an exact copy of your cloud-based data on-premise for archival, compliance, and enterprise reporting requirements.

    Read the article

  • Mouse scroll issue after kernel build

    - by Anish S Kumar
    I have an Intel Cedar Trail netbook. For graphics to work, I had to build kernel 3.1 with the drivers. I followed the steps in this document. After doing that, the graphics are now fine, but my mouse scroll does not work. Is that because I have not built the kernel properly? Have I missed selecting some options in the kernel compile menu? It would be nice if someone could help me. Also, my Wacom Bamboo tablet is not recognized; I have installed the xserver-xorg-input-wacom drivers.

    Read the article

  • Bunny Inc. Season 2: Optimize Your Enterprise Content

    - by kellsey.ruppel
    In a business environment largely driven by informal exchanges, digital assets and peer-to-peer interactions, turning unstructured content into an enterprise-wide resource is the key to gaining organizational agility and reducing IT costs. To get their work done, business users demand a unified, consolidated and secure repository to manage the entire life cycle of content and deliver it in the proper format. At Hare Inc., finding information turns out to be a daunting and error-prone task. At Bunny Inc., on the contrary, Mr. CIO knows the secret to reaching the right carrot! Have a look at the third episode of the Social Bunnies Season 2 to discover how to reduce resource bottlenecks, maximize content accessibility and mitigate risk.

    Read the article

  • Updating a database connection password using a script

    - by Tim Dexter
    An interesting customer requirement that I thought was worthy of sharing today. Thanks to James for the requirement, Bryan for the proposed solution, and me for testing the solution and proving it works :0) A customer's implementation of Sarbanes-Oxley requires them to change all database account passwords every 90 days. This is scripted today, leveraging shell scripts, for most of their environments. But how can they manage the BI Publisher connections? Now, the customer is running 11g and therefore using WebLogic on the middle tier, which is the first clue to Bryan's proposed solution. To paraphrase and embellish Bryan's solution a little: why not use a JNDI connection from BIP to the database, then employ the WebLogic scripting engine to make updates to the JNDI as needed? BIP is completely uninvolved, and with a little 'timing' users will be completely unaware of the password updates, i.e. change the password when reports are not being executed. Perfect! James immediately tracked down the WLST script that could be used here, http://middlewaremagic.com/weblogic/?p=4261 (thanks Ravish). Now it was just a case of testing the theory.

    Some steps:

    1. Create the JNDI connection in WLS.
    2. Create the JNDI connection in BI Publisher, pointing to the WLS connection.
    3. Build new data models using, or re-point data sources to use, the JNDI connection.
    4. Create the WLST script to update the WLS JNDI password as needed.
    5. Test!

    Some details. Creating the JNDI connection in WebLogic is pretty straightforward:

    1. Log into the console and look for Data Sources under the Services section of the home page and click it.
    2. Click New >> Generic Datasource.
    3. Give the connection a name. For the JNDI name, prefix it with 'jdbc/' so I have 'jdbc/localdb' - this name is important, you'll need it on the BIP side.
    4. Select your db type - this will influence the drivers and information needed on the next page. Being a company man, I'm using an Oracle db. Click Next.
    5. Select the driver of choice; there are lots, I know, and you can read about them. I just chose 'Oracle's Driver (Thin) for Instance connections; Versions 9.0.1 and later'. Click Next >> Next.
    6. Fill out the db name (SID), server, port, username to connect and password >> Next.
    7. Test the config to ensure you can connect >> Next.
    8. Now you need to deploy the connection to your BI server; select it and click Next. You're done with the JNDI config.

    Creating the JNDI connection on the Publisher side is covered here. Just remember to use the connection name you created in WLS, e.g. 'jdbc/localdb'. Not gonna tell you how to do this, go read the user guide :0) Suffice to say, it works.

    This requires a little reading around the subject to understand the scripting engine and how to execute scripts. Nicely covered here. However, a bit of googlin' and I found an even easier way of running the script:

        ${ServerHome}/common/bin/wlst.sh updatepwd.py

    where updatepwd.py is my script file; it can be in another directory. As part of the wlst.sh script your environment is set up for you, so it's very simple to execute.

    The nitty gritty: you need to take Ravish's script above and create a file with a .py extension. It's going to need some modification, as he explains on the web page, to make it work in your environment. I played around with it for a while but kept running into errors. The script, as is, tries to loop through all of your connections and modify the user and passwords for each. Not quite what we are looking for. Remember, our requirement is to just update the password for a given connection.
    I also found another issue with the script. WLS 10.x does not allow updates to passwords using clear text, i.e. un-encrypted text, while the server is in production mode. It's a bit much to set it back to development mode, bounce it, change the passwords, bounce it again, then change back to production mode and bounce once more. After lots of messing about I finally came up with the following:

        #############################################################################
        #
        # Update password for JNDI connections
        #
        #############################################################################
        print("*** Trying to Connect.... *****")
        connect('weblogic','welcome1','t3://localhost:7001')
        print("*** Connected *****")
        edit()
        startEdit()
        print ("*** Encrypt the password ***")
        en = encrypt('hr')
        print "Encrypted pwd: ", en
        print ("*** Changing pwd for LocalDB ***")
        dsName = 'LocalDB'
        print 'Changing Password for DataSource ', dsName
        cd('/JDBCSystemResources/'+dsName+'/JDBCResource/'+dsName+'/JDBCDriverParams/'+dsName)
        set('PasswordEncrypted',en)
        save()
        activate()

    It's pretty simple and you can expand on it to loop through the data sources and change each as needed. I have hardcoded the password into the file, but you can pass it as a parameter as needed using the properties file method. I'm not going to get into the detail of that here, but it's covered with an example here. Couple of points to note:

    1. The change to the password requires a server bounce to get the changes picked up. You can add that to the shell script you will use to call the script above.
    2. The script above needs to be run from the MW_HOME\user_projects\domains\bifoundation_domain directory to get the encryption libraries set correctly.

    My command to run the whole script was:

        d:\oracle\bi_mw\wlserver_10.3\common\bin\wlst.cmd updatepwd.py

    where wlst.cmd is the scripting command line and updatepwd.py was my update password script above. I have not quite spoon-fed everything you need to make it a robust script, but at least you know you can do it and you can work out the rest, I think :0)

    Read the article

  • Remote Display Config.sh Using SSH

    - by john.graves(at)oracle.com
    How often I see people look to VNC, NXMachine, RDP, etc. to get a windowing environment on a remote system. These products are great and I use them too, but there is a fancy feature in SSH to help:

        ssh -X remoteserver

    This is a great feature for hooking into headless VirtualBox machines and remote-displaying an install wizard. The remote server must have some lines put in the /etc/ssh/sshd_config file:

        X11Forwarding yes
        X11DisplayOffset 10

    The second line is optional, but the first is required. Restart sshd (sudo /etc/init.d/ssh restart). Now I can ssh -X remoteserver, then run /opt/app/wls10.3.4/wlserver_10.3/common/bin/config.sh to build a new domain. Note: for some reason, the JDK that comes with WebLogic often fails to work on the remote display. In that case, I modify the config.sh to just use /usr/bin/java (from the openjdk-6-jre package).

    Read the article

  • C++ : Lack of Standardization at the Binary Level

    - by Nawaz
    Why didn't ISO/ANSI standardize C++ at the binary level? There are many portability issues with C++, which exist only because of its lack of standardization at the binary level. Don Box writes (quoting from his book Essential COM, chapter "COM As A Better C++"):

        C++ and Portability
        Once the decision is made to distribute a C++ class as a DLL, one is faced with one of the fundamental weaknesses of C++, that is, lack of standardization at the binary level. Although the ISO/ANSI C++ Draft Working Paper attempts to codify which programs will compile and what the semantic effects of running them will be, it makes no attempt to standardize the binary runtime model of C++. The first time this problem will become evident is when a client tries to link against the FastString DLL's import library from a C++ development environment other than the one used to build the FastString DLL.

    Are there more benefits or losses from this lack of binary standardization?
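    A small illustration of the usual mitigation for the scenario the quote describes (not something the quoted passage itself prescribes, and the names here are invented): expose a C-compatible boundary from the DLL, so clients never depend on a particular compiler's C++ object layout or name mangling.

        // faststring_c.h - a sketch of a C-style facade over a C++ class
        #ifdef __cplusplus
        extern "C" {
        #endif

        typedef struct FastStringHandle FastStringHandle;   /* opaque to clients */

        FastStringHandle* faststring_create(const char* text);
        int               faststring_length(const FastStringHandle* fs);
        void              faststring_destroy(FastStringHandle* fs);

        #ifdef __cplusplus
        }
        #endif

    The trade-off, of course, is exactly what the book goes on to discuss: you give up the natural C++ class interface at the DLL boundary in exchange for a binary contract every compiler agrees on.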

    Read the article

  • How to enable Mic in Wine (pulseaudio)

    - by Postadelmaga
    I'm not able to make the mic work with Wine 1.5. I found some suggestions, but they don't seem to work:

    Disable winepulse.drv (aka PulseAudio):
    "A1: Go to winecfg; in the tab 'Libraries', enter 'winepulse.drv' in the box and click on Add, then click Yes on the warning. Select winepulse.drv and change the load order to 'disabled'. Now try to trigger your bug again; if it was a winepulse bug it shouldn't trigger any more.
    A2 (easier and preferred): From 1.5.3 onward, launch your program with WINENOPULSE=1 wine program.exe to temporarily disable winepulse for that program."
    Reference: http://ubuntuforums.org/showthread.php?t=1960599

    Read the article
