Search Results

Search found 21343 results on 854 pages for 'pass by reference'.


  • Renderbuffer to GLSL shader?

    - by Dan
    I have software that performs volume rendering through a raycasting approach. The raycasting shader writes the raycasted volume depth, through gl_FragDepth, into a framebuffer object that I bind before calling the shader. The problem is that I would like to use this depth in another shader that I call later on. The only way I have found is to bind the framebuffer once the raycasting has finished, read the depth map back with something like glReadPixels(0, 0, m_winSize.x, m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); upload it to a 2D texture as usual with glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); and then pass this 2D texture, which contains a plain depth map, to the other shader. However, I am not entirely sure that this is the proper way to do it. Is there any way to pass the framebuffer that I fill in my raycasting shader directly to the other shader?
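    One common answer, sketched below with assumed names (depthTex, raycastFbo, winW and winH are not from the question): attach a depth texture, rather than a renderbuffer, as the FBO's depth attachment. The raycasting pass then writes gl_FragDepth straight into the texture, which the later shader can sample directly, so the data never leaves the GPU:

        /* Create a depth texture sized to the window; note the NULL data pointer. */
        GLuint depthTex;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, winW, winH, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        /* Use it as the depth attachment of the raycasting FBO. */
        glBindFramebuffer(GL_FRAMEBUFFER, raycastFbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);

        /* Later: bind depthTex as a plain sampler2D input to the second shader. */
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, depthTex);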


  • Landed Cost Management Integration with OPM Financials

    - by Robert Story
    Upcoming Webcast
    Title: Landed Cost Management Integration with OPM Financials
    Date: April 21, 2010
    Time: 11:00 am EDT, 9:00 am PDT, 8:00 am MDT
    Product Family: EBS: Process Manufacturing

    Summary
    This one-hour session will present a setup overview and detailed steps for a test case, and is recommended for functional users who are using the OPM Financials module with an actual costing method. Topics will include:
    - Overview of Landed Cost Management functionality
    - Setup steps and a test case
    - Some technical considerations
    - Documentation and other reference materials available
    A short, live demonstration (only if applicable) and a question and answer period will be included. Click here to register for this session.
    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.


  • How best do you represent a bi-directional sync in a REST api?

    - by Edward M Smith
    Assuming a system where there's a Web Application with a resource, and a reference to a remote application with another similar resource, how do you represent a bi-directional sync action which synchronizes the 'local' resource with the 'remote' resource? Example: I have an API that represents a todo list:
        GET/POST/PUT/DELETE /todos/, etc.
    That API can reference remote TODO services:
        GET/POST/PUT/DELETE /todo_services/, etc.
    I can manipulate todos from the remote service through my API, as a proxy, via:
        GET/POST/PUT/DELETE /todo_services/abc123/, etc.
    I want the ability to do a bi-directional sync between a local set of todos and the remote set. In an RPC sort of way, one could do:
        POST /todo_services/abc123/sync/
    But under the "verbs are bad" idea, is there a better way to represent this action?
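    One resource-oriented alternative, offered here as a sketch (the synchronizations endpoint and payload are assumptions, not from the question): make each sync run a resource of its own, so the action becomes creating a synchronization:

        POST /todo_services/abc123/synchronizations
        => 202 Accepted
           Location: /todo_services/abc123/synchronizations/42

        GET /todo_services/abc123/synchronizations/42
        => 200 OK  { "status": "running", "direction": "bidirectional" }

    The POST creates a job whose progress can then be polled with plain GETs, which keeps the URI space noun-based while still modeling the action.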


  • BizTalk 2009 - Creating a Custom Functoid Library

    - by StuartBrierley
    If you find that you need to create multiple custom functoids, you may also choose to create a Custom Functoid Library - a single project containing many custom functoids. As previously discussed, the Custom Functoid Wizard can be used to create a project with a new custom functoid inside. But what if you want to extend this project to include more custom functoids and create your Custom Functoid Library?

    First create a Custom Functoid Library project and your first custom functoid using the Custom Functoid Wizard. When you open your Custom Functoid Library project in Visual Studio you will see that it contains your custom functoid class file along with its resource file. One of the items this resource file contains is the ID of the custom functoid. Each custom functoid needs a unique ID greater than 6000. When creating a Custom Functoid Library I would suggest that you first delete the ID from this resource file and instead create a _FunctoidIDs class containing constants for each of your custom functoids. This way you can easily see which ID is assigned to which custom functoid and which ID is next in the sequence of availability:

        namespace MyCompany.BizTalk.Functoids.TestFunctoids
        {
            class _FunctoidIDs
            {
                public const int TestFunctoid = 6001;
            }
        }

    You will then need to update the base() function in your existing functoid class to reference these constant values rather than the current resource file. From:

        int functoidID;
        // This has to be a number greater than 6000
        functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
        this.ID = functoidID;

    To:

        this.ID = _FunctoidIDs.TestFunctoid;

    To create a new custom functoid you can copy the existing custom functoid, renaming the resulting class file as appropriate. Once it is renamed you will need to change the class name, the ResourceName reference and the base function name in the class code to those of your new custom functoid. You will also need to create a new constant value in the _FunctoidIDs class and update the ID reference in your code to match it. Assuming that you need some different functionality from your new custom functoid, you will need to check or amend the following in your functoid class file:
    - Min and Max connections
    - Functoid Category
    - Input and Output connection types
    - The parameters and functionality of the Execute function

    To change the appearance of your new custom functoid you will need to check or amend the following in the functoid resource file:
    - Name
    - Description
    - Tooltip
    - Exception
    - Icon

    You can change the string values by double clicking the resource file and amending the value fields in the string table. To amend the functoid icon you will need to create a 16x16 bitmap image. Once you have saved this you are ready to import it into the functoid resource file. In Visual Studio change the resource view to images, right click the icon and choose Import from File.

    You have now completed your new custom functoid and created a Custom Functoid Library. You can test your new library of functoids by building the project, copying the resulting DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then resetting the toolbox in Visual Studio.


  • Will high reputation in Programmers help to get a good job?

    - by Lorenzo
    In reference to this question, do you think that having a high reputation on this site will help you get a good job? Aside from silly and humorous questions, on Programmers we can see a lot of high-quality theory questions. I think that if Stack Overflow eventually evolves into "strictly programming related" (which usually means "strictly coding related"), the questions on Programmers will be much more interesting and meaningful ("Stack Overflow" = "I have this specific coding/implementation issue"; "Programmers" = "best practices, team shaping, paradigms, CS theory"). So could high reputation on this site help (or at least be a good reference)? And would it help more or less than Stack Overflow reputation?


  • How do I recycle an IIS App pool with Powershell?

    - by Ralph Willgoss
    Reference implementation of a Powershell script to recycle app pools, in response to Rick's post: http://www.west-wind.com/weblog/posts/2012/Oct/02/A-tiny-Utility-to-recycle-an-IIS-Application-Pool

        #    File: RecycleAppPool.ps1
        #    Author: Ralph Willgoss
        #    Date: 2nd Oct 2012
        #    Reference:
        #    http://stackoverflow.com/questions/198623/how-do-i-recycle-an-iis-apppool-with-powershell
        #
        #    Alternative is to create a Process and run the inbuilt vbs:
        #    C:\WINDOWS\system32\iisapp.vbs => "IIsApp /a DefaultAppPool /r"
        #
        #    Windows 2003 & IIS6: C:\WINDOWS\system32>cscript.exe iisapp.vbs /a StaticDataAppPool /r
        #    Windows 2008 & IIS7: [tbd]
        # =============================================================================
        #    Initialise
        # =============================================================================
        param ( )

        # =============================================================================
        #    Main
        # =============================================================================
        Write-Output ""
        Write-Output "Starting Recycling App Pool"
        Write-Output ""

        $appPoolName = "StaticDataAppPool" # $args[0]
        $appPool = Get-WmiObject -Namespace "root\MicrosoftIISv2" -Class "IIsApplicationPool" |
                   Where-Object { $_.Name -eq "W3SVC/APPPOOLS/$appPoolName" }
        $appPool.Recycle()

        Write-Output ""
        Write-Output "Finished Recycling App Pool"
        Write-Output ""
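    As a side note (an assumption, not part of the original script): on IIS 7.5 and later, the WebAdministration module ships a cmdlet that does the same job in one line:

        # Requires the WebAdministration module (IIS 7.5+, elevated session).
        Import-Module WebAdministration
        Restart-WebAppPool -Name "StaticDataAppPool"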


  • Etymology of software project names [closed]

    - by Benoit
    I would like to have a reference community wiki here in order to know what etymologies software names have, or why they are named the way they are. I was wondering why ImageMagick's mogrify was named this way. Today I wondered the same about Apache Lucene. It would be handy to have a list here. Could we extend such a list? Let me start and let you edit it, please. I will ask for this to be made community wiki. For each entry please link to an external reference.
    - GNU Emacs: stands for "Editor MACroS".
    - Apache Lucene: an Armenian name.
    - ImageMagick mogrify: from "transmogrify".
    Thanks.


  • OPN Exchange @ OpenWorld - General Sessions on Sunday, 30. September 15:30 - 16:30 PDT

    - by rituchhibber
    Are you building your personal conference schedule already? Exclusive to partners who registered for the OPN Exchange program, the OPN Exchange General Sessions can be added to your agenda, bringing the total to 49 OPN Exchange sessions throughout the week. If you have registered for Oracle OpenWorld and would like to attend these 49 partner-dedicated sessions, just add OPN Exchange to your registration* for just $100 by 28 September 2012. *Note: Upgrades are available to all conference passes, except Discover and Exhibitor Staff pass holders. Discover and Exhibitor Staff pass holders, please contact the Oracle OpenWorld Registration Team at Tel: +1.650.226.0812 (International), Monday through Friday, 6:00 a.m. to 6:00 p.m. (Pacific time), or by email at [email protected]


  • Error: type or namespace name 'AssemblyKeyFileAttribute' and 'AssemblyKeyFile' could not be found

    To associate an assembly with a strong key file in order to store it in the GAC, we should include the following line after all the imports and before defining the namespace.

        For VB.NET:  <Assembly: AssemblyKeyFile("c:\path\mykey.snk")>
        For C#:      [assembly: AssemblyKeyFile(@"c:\path\mykey.snk")]

    However, you might encounter the following two errors when creating an assembly for the GAC:
    1. The type or namespace name 'AssemblyKeyFileAttribute' could not be found (are you missing a using directive or an assembly reference?)
    2. The type or namespace name 'AssemblyKeyFile' could not be found (are you missing a using directive or an assembly reference?)
    How to resolve these errors: just include the "System.Reflection" namespace. That resolves both errors.
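    For completeness, a minimal sketch of what the corrected file looks like in C# (the file name and key path are illustrative assumptions):

        // AssemblyInfo.cs - the using directive brings AssemblyKeyFileAttribute into scope.
        using System.Reflection;

        [assembly: AssemblyKeyFile(@"c:\path\mykey.snk")]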


  • For normal mapping, why can we not simply add the tangent normal to the surface normal?

    - by sebf
    I am looking at implementing bump mapping (which in all implementations I have seen is really normal mapping), and so far everything I have read says that to do this, we create a matrix to convert from world space to tangent space, in order to transform the light and eye direction vectors into tangent space, so that the vectors from the normal map may be used directly in place of those passed through from the vertex shader. What I do not understand, though, is why we cannot just use the normalised sum of the sampled normal vector and the surface normal (assuming we already transform and pass through the surface normal for the existing lighting functions). Take the diagram below; the normal is simply the deviation from the 'reference normal' for any given coordinate system, correct? And transforming the surface normal of a mapped surface from world space to tangent space makes it equivalent to the tangent-space 'reference normal', no? If so, why do we transform all lighting vectors into tangent space, instead of simply transforming the sampled tangent-space normal once in the pixel shader?
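    For reference, here is a minimal GLSL sketch of the alternative the question hints at (all variable names are assumed): build the TBN basis in the pixel shader and rotate the sampled normal into world space, rather than carrying the lights into tangent space. The key point is that the sampled normal is expressed relative to the tangent frame, so it must be rotated into a common basis; simply adding it to the surface normal would mix vectors from two different coordinate systems.

        // A sketch assuming a right-handed, orthogonalised tangent basis
        // interpolated from the vertex shader (vs_normal, vs_tangent).
        vec3 N = normalize(vs_normal);                      // surface normal (world space)
        vec3 T = normalize(vs_tangent);
        vec3 B = cross(N, T);                               // bitangent
        mat3 TBN = mat3(T, B, N);                           // tangent -> world rotation
        vec3 n = texture(normalMap, uv).xyz * 2.0 - 1.0;    // unpack from [0,1] to [-1,1]
        vec3 worldNormal = normalize(TBN * n);              // usable with world-space lights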


  • What method replaces GL_SELECT for box selection?

    - by Jake
    Two weeks ago I started working on a box selection that selects shapes using GL_SELECT, and I finally just got it working. When looking up resources online, a significant number of posts say GL_SELECT is deprecated in OpenGL 3.0, but there is no mention of what has replaced it. I learnt OpenGL 1.2 back in college two years ago, but checking Wikipedia now I realise we are already at OpenGL 4.0, and I am unsure what I need to do to keep myself up to date. So, in the meantime, what would be the preferred method for box selection? EDIT: I found http://www.khronos.org/files/opengl-quick-reference-card.pdf - on page 5 this card still lists glRenderMode(GL_SELECT) as part of the OpenGL 3.2 reference.
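    One common replacement for GL_SELECT is colour picking, sketched below as an assumption rather than anything from the original post: draw each object with a unique flat colour into an offscreen framebuffer, then read back the box region; the colours found inside it identify the selected objects.

        #include <stdlib.h>

        int w = x2 - x1, h = y2 - y1;                 /* box corners assumed given */
        unsigned char *buf = malloc((size_t)w * h * 3);
        /* ... offscreen pass: draw object i with flat colour
               (i & 0xFF, (i >> 8) & 0xFF, (i >> 16) & 0xFF) ... */
        glReadPixels(x1, y1, w, h, GL_RGB, GL_UNSIGNED_BYTE, buf);
        for (int p = 0; p < w * h; p++) {
            int id = buf[3*p] | (buf[3*p+1] << 8) | (buf[3*p+2] << 16);
            /* id 0 can be reserved for background; mark object `id` as selected */
        }
        free(buf);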


  • Passing additional parameters to JQuery bind event function

    - by kazim sardar mehdi
    To pass additional parameters to the event function, pass an object of key/value pairs as the second parameter to bind:

        bind('click', { message: time }, onClick);

    e.g. { message: time }, and access it in the function via the event parameter's data property: event.data.message.

        <div id="div1" style="border: 1px solid black; width: 100px; height: 100px">click me</div>
        <script type="text/javascript">
            function onClick(event) {
                alert(event.data.message);
            }
            var time = "loaded at:" + new Date().toString();
            $("#div1").bind('click', { message: time }, onClick);
        </script>
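    As an aside (not in the original post), jQuery 1.7+ expresses the same pattern with .on, using the identical event-data object:

        // Equivalent registration with jQuery 1.7+; event.data.message works the same way.
        $("#div1").on('click', { message: time }, onClick);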


  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (would look okay rendered alone on a quad) and are not image processing (don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. A few examples of what I'm looking for are the RenderMonkey samples (screenshots originally shown here). PS - I'm aware of this question; I'm not asking for a source of actual shader implementations but instead for some inspirational ideas -- and the ones at the NVIDIA Shader Library mostly require a scene or are image-processing effects. EDIT: this is an open-ended question and I wish there were a good way to split the bounty. I'll award the rep to the best answer on the last day.


  • Where to find Information on Software/Technologies Supported by PSRM

    - by Paula Speranza-Hadley
    People often ask where they can find information about the software and technologies supported by PSRM, and which versions are supported. This information can be found in the following locations:
    - For XPath - see the Script Engine Version dropdown in the script display/maintenance portal in PSRM for the three kinds of scripts we support. Reference document: http://docs.oracle.com/cd/E50182_01/PDF/PSRM_Quick_Install_Guide_v2_4_0_0_0.pdf
    - For HTML/JavaScript - as supported by the supported browsers mentioned in the installation guides.
    - For supported platforms (OS, browsers, app servers and database servers) - see the Certified and Supported Platforms section. Reference document: http://docs.oracle.com/cd/E50182_01/PDF/PSRM_Installation_Guide_v2_4_0_0_0.pdf
    - For information related to the Oracle client, Java, Micro Focus and web servers - see the Installation Checklist section.
    - For third-party products, copyright and licensing notices (such as Apache frameworks/libraries) - see the License and Copyright Notices section (Appendix B).


  • Java EE and GlassFish Server Roadmap Update

    - by John Clingan
    2013 has been a stellar year for both the Java EE and GlassFish Server communities. On June 12, Oracle and its partners announced the release of Java EE 7, which delivers on three major themes – HTML5, developer productivity, and meeting enterprise demands. The online event attracted over 10,000 views in the first two days! During the online event, Oracle also announced the availability of GlassFish Server Open Source Edition 4, the world's first Java EE 7 compatible application server. The primary role of GlassFish Server Open Source Edition has been, and continues to be, driving adoption of the latest release of the Java Platform, Enterprise Edition. Oracle also announced the Java EE 7 SDK, which bundles GlassFish Server Open Source Edition 4, as a Java EE 7 learning aid. Last, Oracle publicly announced the Java EE 7 reference implementation based on GlassFish Server Open Source Edition 4.

    Java EE is a popular platform, as evidenced by the 20+ Java EE 6 compatible implementations available to choose from. After the launch of Java EE 7 and GlassFish Server Open Source Edition 4, we began planning the Java EE 8 roadmap, which was covered during the JavaOne Strategy Keynote. To summarize, there is a lot of interest in improving on HTML5 support, Cloud, and investigating NoSQL support. We received a lot of great feedback from the community and customers on what they would like to see in Java EE 8. As we approached JavaOne 2013, we started planning the GlassFish Server roadmap. What we announced at JavaOne was that GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Here is an update to that roadmap:
    - GlassFish Server Open Source Edition 4.1 is scheduled for 2014
    - We are planning updates as needed to GlassFish Server Open Source Edition, which is commercially unsupported
    - As we head towards Java EE 8:
      - The trunk will eventually transition to GlassFish Server Open Source Edition 5 as a Java EE 8 implementation
      - The Java EE 8 Reference Implementation will be derived from GlassFish Server Open Source Edition 5. This replicates what has been done in past Java EE and GlassFish Server releases.
    - Oracle will no longer release future major releases of Oracle GlassFish Server with commercial support – specifically, Oracle GlassFish Server 4.x with commercial Java EE 7 support will not be released. Commercial Java EE 7 support will be provided from WebLogic Server.
    - Oracle GlassFish Server will not be releasing a 4.x commercial version

    Expanding on that last bullet, new and existing Oracle GlassFish Server 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. Oracle recommends that existing commercial Oracle GlassFish Server customers begin planning to move to Oracle WebLogic Server, which is a natural technical and license migration path forward:
    - Applications developed to Java EE standards can be deployed to both GlassFish Server and Oracle WebLogic Server
    - GlassFish Server and Oracle WebLogic Server have implementation-specific deployment descriptor interoperability (here and here)
    - GlassFish Server 3.x and Oracle WebLogic Server share quite a bit of code, so there are quite a few configuration and (extended) feature similarities. Shared code includes JPA, JAX-RS, WebSockets (pre JSR 356 in both cases), CDI, Bean Validation, JAX-WS, JAXB, and WS-AT.
    - Both Oracle GlassFish Server 3.x and Oracle WebLogic Server 12c support Oracle Access Manager, Oracle Coherence, Oracle Directory Server, Oracle Virtual Directory, Oracle Database, Oracle Enterprise Manager and are entitled to support for the underlying Oracle JDK

    To summarize, Oracle is committed to the future of Java EE. Java EE 7 has been released and planning for Java EE 8 has begun. GlassFish Server Open Source Edition continues to be the strategic foundation for the Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition. We are planning for GlassFish Server Open Source Edition 5 as the foundation for the Java EE 8 reference implementation, as well as bundling GlassFish Server Open Source Edition 5 in a Java EE 8 SDK, which is the most popular distribution of GlassFish. This will allow GlassFish releases to be more focused on the Java EE platform and community-driven requirements. We continue to encourage community contributions, bug reports, participation on the GlassFish forum, etc. Going forward, Oracle WebLogic Server will be the single strategic commercially supported application server from Oracle.

    Disclaimer: The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


  • WCF Operations and Multidimensional Arrays

    - by JoshReuben
    You can't pass multidimensional arrays across the wire using WCF - you need to pass jagged arrays. Here are two extension methods that will let you convert prior to serialization and convert back after deserialization:

        // Requires: using System.Linq;
        public static T[,] ToMultiD<T>(this T[][] jArray)
        {
            int i = jArray.Count();
            int j = jArray.Select(x => x.Count()).Aggregate(0, (current, c) => (current > c) ? current : c);

            var mArray = new T[i, j];
            for (int ii = 0; ii < i; ii++)
            {
                // Bound by each row's own length to guard against ragged input.
                for (int jj = 0; jj < jArray[ii].Length; jj++)
                {
                    mArray[ii, jj] = jArray[ii][jj];
                }
            }
            return mArray;
        }

        public static T[][] ToJagged<T>(this T[,] mArray)
        {
            var cols = mArray.GetLength(0);
            var rows = mArray.GetLength(1);
            var jArray = new T[cols][];
            for (int i = 0; i < cols; i++)
            {
                jArray[i] = new T[rows];
                for (int j = 0; j < rows; j++)
                {
                    jArray[i][j] = mArray[i, j];
                }
            }
            return jArray;
        }

    enjoy!
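    A quick usage sketch (the sample data is assumed, not from the original post):

        // Convert before sending across the WCF boundary, convert back afterwards.
        int[,] grid = { { 1, 2 }, { 3, 4 } };
        int[][] wireSafe = grid.ToJagged();    // safe to expose on a WCF contract
        int[,] restored = wireSafe.ToMultiD(); // round-trips back to the 2x2 array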


  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn't on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn't present at all. Let's not talk about next year. I'm not sure there are many options left.

    This year, I noticed that a lot more people recognised me and said hello. I guess that's potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I've put in through things like 24HOP... Yeah, ok. It'd be the singing.

    My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn't. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably.

    Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS' marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of "we're listening", and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks.

    PASSTV was a new thing this year. Justin, the host, featured on the couch and talked to a lot of people about a lot of things, including me (he talked to me about a lot of things; I don't think he talked to a lot of people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I'm keen to see how this medium can be developed over time.

    People who know me will know that I'm a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don't believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn't expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn't even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I'll find out my result at some point in the future – the Prometric site just says "Tested" at the moment. As I said, it wasn't something I was expecting to do, but it was good to have something unexpected during the week.

    Of course it was good to catch up with old friends and make new ones. I feel like every time I'm in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won't try to list them, because there are so many these days that people might feel sad if I don't mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much.

    One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn't normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He's seen the significance of the SQL Server community, and says he'll be back. It'll be good to see him there. Will you be there too?


  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what, initially, appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass: Username 'Me', password 'NowEverybodyKnowsMyPassword'".

    As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, and the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (and you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like: Server1,UserName,Password Server2,UserName,Password"

    I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me, I have to do my own marketing, after all) and the solution was finally ready. First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and another for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions. Next, we have to create a txt file with your encrypted passwords:

        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -InputString $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -InputString $Password -Password $PasswordToEncrypt
        "$($ServerName),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -InputString $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -InputString $Password -Password $PasswordToEncrypt
        "$($ServerName),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    In the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your Username and Password, all neatly encrypted, with server names, usernames and passwords separated by commas (screenshot omitted). Decryption is actually much simpler:

        Read-EncryptedString -InputString $EncryptString -Password "YourPassword"

    (Just remember that the password used to decrypt must be exactly the same as the one used to encrypt.)

    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a checkdb on a system database: it's just a case of split, decrypt and be happy!

        Get-Content c:\temp\ServersSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) `
                -UserName (Read-EncryptedString -InputString $Split[1] -Password "YourPassword") `
                -Password (Read-EncryptedString -InputString $Split[2] -Password "YourPassword") `
                -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }

    This is why I love Powershell.


  • asus n550jv audio problem: no sound from notebook' speakers

    - by skywalker
    Ubuntu 13.10. The problem is: the internal speakers don't work. I have no problem when I'm using the headphones. There is no hardware issue, since in Windows 8 everything works perfectly (external subwoofer included). I'm trying to modify /etc/modprobe.d/alsa-base.conf but I can't find the correct model to put into:

        options snd-hda-intel model=

    The file HD-Audio-Models.txt doesn't contain the model for ALC668. Some info:

        ~$ sudo aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: MID [HDA Intel MID], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 7: HDMI 1 [HDMI 1]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 8: HDMI 2 [HDMI 2]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: PCH [HDA Intel PCH], device 0: ALC668 Analog [ALC668 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0

        ~$ sudo lspci -v | grep -A7 -i "audio"
        00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
            Subsystem: Intel Corporation Device 2010
            Flags: bus master, fast devsel, latency 0, IRQ 52
            Memory at f7a14000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit-
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Kernel driver in use: snd_hda_intel
        --
        00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
            Subsystem: ASUSTeK Computer Inc. Device 11cd
            Flags: bus master, fast devsel, latency 0, IRQ 53
            Memory at f7a10000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel

    PS, more info:

        ~$ amixer -c 0
        Simple mixer control 'IEC958',0
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',1
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',2
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]

        ~$ pacmd dump-volumes
        Welcome to PulseAudio! Use "help" for usage information.
        Sink 0: reference = 0: 76% 1: 76%, real = 0: 76% 1: 76%, soft = 0: 100% 1: 100%, current_hw = 0: 76% 1: 76%, save = yes
        Input 8: volume = 0: 100% 1: 100%, reference_ratio = 0: 100% 1: 100%, real_ratio = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, volume_factor = 0: 100% 1: 100%, volume_factor_sink = 0: 100% 1: 100%, save = no
        Source 0: reference = 0: 100% 1: 100%, real = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, current_hw = 0: 100% 1: 100%, save = no
        Source 1: reference = 0: 16% 1: 16%, real = 0: 16% 1: 16%, soft = 0: 100% 1: 100%, current_hw = 0: 16% 1: 16%, save = yes


  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore?
    As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer
    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX
    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries
    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details
    Classic vs. columnstore before-&-after metrics are impressive.

        Scenario        Conventional Structures            Columnstore     Improvement
        SSRS via SSAS   10 - 12 seconds                    1 second        >10x
        Ad Hoc          5 - 7 minutes (300 - 420 seconds)  1 - 2 seconds   >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible. (Charts omitted.)

    The Wins
    - Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    - Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    - PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing time to results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
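    For readers who want to try this themselves, a minimal sketch of the kind of index involved (the table and column names are assumptions, not DevCon's actual schema):

        -- SQL Server 2012+: a nonclustered columnstore index over a DW fact table.
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSecurity
        ON dbo.FactSecurity (EventDate, CustomerId, EventType, DurationSeconds);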


  • SQLAuthority News – Why I am Going to Attend #SQLPASS Summit 2012 – Seattle

    - by pinaldave
    I am going to Seattle to once again attend SQLPASS this year. This will be my fourth SQLPASS. Lots of people ask me why I go to SQLPASS every year. Well, there are so many different reasons for that. I go to SQLPASS because - I love it! Here are a few of the reasons I go to SQLPASS:
    - Meet friends whom I have never met before
    - Meet the community at large - it is fun to hang around with like-minded people
    - Meet Rick Morelan - my book co-author and friend
    - Attend various SQL parties - there are so many parties around - see the list below
    - Explore various new tools from various third-party vendors
    - Meet fellow Chapter Leaders and Regional Mentors
    - And of course attend SQL Server learning sessions from industry-known experts
    The three-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees. If I am a consultant or vendor who is looking for better career opportunities, PASS Summit is the perfect platform to meet and show my skills to new potential customers and employers. Further, breakfasts, lunches, and evening receptions, which are included with registration, are meant to provide more and more networking opportunities. At PASS Summit, I not only gain new ideas but also draw inspiration from top professionals and experts. Learning new things about SQL Server, interacting with different kinds of professionals, and sharing issues and solutions will definitely improve my understanding and turn me into a better SQL Server professional who can leverage and optimize SQL Server to improve business. I am going - are you joining? Note: This is re-blogged with modification from my 2-year-old blog posts on a similar subject. Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Should all foreign table references use foreign key constraints

    - by TecBrat
    Closely related to: Foreign key restrictions -> yes or no? I asked a question on SO and it led me to ask this here. If I'm faced with a choice of having a circular reference or just not enforcing the constraint, which is the better choice? In my particular case I have customers and addresses. I want an address to have a reference to a customer, and I want each customer to have a default billing address id and a default shipping address id. I might query for all addresses that have a certain customer ID, or I might query for the address with the ID that matches the default shipping or billing address ids. I'm not sure yet how the constraints (or lack thereof) will affect the system as my application and its data age.
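    To make the trade-off concrete, here is a minimal sketch of the circular case (table and column names are assumed, not from the question): making the customer's default-address columns nullable breaks the chicken-and-egg problem at insert time - insert the customer, then its addresses, then update the defaults.

        -- Assumed schema, for illustration only.
        CREATE TABLE customer (
            id                          INT PRIMARY KEY,
            default_billing_address_id  INT NULL,
            default_shipping_address_id INT NULL
        );

        CREATE TABLE address (
            id          INT PRIMARY KEY,
            customer_id INT NOT NULL REFERENCES customer(id)
        );

        -- The circular references, enforced but nullable:
        ALTER TABLE customer
            ADD FOREIGN KEY (default_billing_address_id) REFERENCES address(id);
        ALTER TABLE customer
            ADD FOREIGN KEY (default_shipping_address_id) REFERENCES address(id);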


  • Do you have a plan for your digital assets after you die?

    - by pablo
    After reading this question I was reminded of a news article about some websites that manage your online identity after you pass away. Have you planned what to do with your digital assets once you are gone? I'd imagine that your online footprint is as important as anything you leave of material value. I mean, what is the difference between that open-source project you created and the money and savings you had? How would you like to have your identity managed after you pass away? Would you prefer to go "off the grid"? It's a sensitive topic, and I have never met anyone who prepared for it.


  • What to call objects that may delete cached data to meet memory constraints?

    - by Brent
    I'm developing some cross-platform software which is intended to run on mobile devices. Both iOS and Android provide low memory warnings. I plan to make a wrapper class that will free cached resources (like textures) when low memory warnings are issued, assuming the resource is not in use. If the resource comes back into use, the wrapper will re-cache it, and so on. I'm trying to think of what this is called. In .NET it's similar to a "weak reference", but that only really makes sense when dealing with garbage collection, and since I'm using C++ and shared_ptr, "weak reference" already has a meaning which is distinct from the one I'm thinking of. There's also the difference that this class will be able to rebuild the cache when needed. What is this pattern called? Edit: feel free to recommend tags for this question.
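    For what it's worth, here is a minimal C++ sketch of the wrapper described above (all names are assumed, not from the question): the handle drops its cached resource on a low-memory purge when nothing else holds it, and lazily rebuilds it on the next access.

        #include <functional>
        #include <memory>

        template <typename T>
        class PurgeableResource {
        public:
            // loader rebuilds the resource on demand (e.g. reloads a texture).
            explicit PurgeableResource(std::function<std::shared_ptr<T>()> loader)
                : loader_(std::move(loader)) {}

            std::shared_ptr<T> get() {                    // re-cache if purged
                if (!cached_) cached_ = loader_();
                return cached_;
            }

            void onLowMemory() {                          // hook up to the OS warning
                if (cached_ && cached_.use_count() == 1)  // not in use elsewhere
                    cached_.reset();
            }

        private:
            std::function<std::shared_ptr<T>()> loader_;
            std::shared_ptr<T> cached_;
        };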

