Search Results

Search found 5786 results on 232 pages for 'umbraco scripts'.

Page 75/232 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from Red Gate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model.

    To summarize our current system: we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release.

    Separate from the above, we have database hot fixes that need to be applied in between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts, and run them on PRODUCTION.

    If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and then developers would pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work: pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server.

    My only workaround so far is to have OPS assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?
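    One way to automate that last manual step is a small script that replays hot fix scripts from hgprod against the shared DEV server, so SQL Source Control sees DEV and the repository in agreement again. The sketch below is a hypothetical illustration, not the poster's setup: the repository URL, the server name, and the convention that hot fixes live under hotfixes/*.sql are all assumptions.

        import subprocess
        from pathlib import Path

        HGPROD = "https://hg.example.com/hgprod"   # hypothetical repository URL
        DEV_SERVER = "DEVSQL01"                    # hypothetical shared DEV SQL Server

        def run(*args: str) -> None:
            subprocess.run(args, check=True)       # fail loudly if any step breaks

        def apply_hotfixes(workdir: str) -> None:
            """Pull the latest hgprod changesets and replay hot fix SQL on DEV."""
            run("hg", "pull", HGPROD, "-R", workdir)
            run("hg", "update", "-R", workdir)
            for script in sorted(Path(workdir, "hotfixes").glob("*.sql")):
                # sqlcmd ships with SQL Server; -b aborts on script errors
                run("sqlcmd", "-S", DEV_SERVER, "-b", "-i", str(script))

    TeamCity could trigger something like this on hgprod check-ins, which would close the loop the poster describes without a developer in the middle.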

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around.

    Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE.

    Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team.

    I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • Scrubbing a DotNetNuke Database for user info and passwords

    - by Chris Hammond
    If you’ve ever needed to send a backup of your DotNetNuke database to a developer for testing, you likely trust the developer enough to do so without scrubbing your data, but just to be safe it is probably best that you do take the time to scrub. Before you do anything with the SQL below, make sure you have a backup of your website! I would recommend you do the following:

    1. Back up your existing production database
    2. Restore the backup of your production database as a NEW database
    3. Run the scripts below...(read more)
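    The article's actual scripts are elided above. Purely as an illustration of the kind of scrubbing meant — overwriting personally identifiable values on the restored copy, never on production — here is a hedged sketch. The DSN and the dbo.Users/UserID/Email names are assumptions to verify against your own DotNetNuke schema before running anything.

        import pyodbc  # assumes an ODBC driver for SQL Server is installed

        # Hypothetical scrub: anonymize every email on the RESTORED COPY only.
        SCRUB = """
        UPDATE dbo.Users
        SET Email = 'user' + CAST(UserID AS varchar(12)) + '@example.com';
        """

        with pyodbc.connect("DSN=ScrubCopy", autocommit=True) as conn:  # hypothetical DSN
            conn.execute(SCRUB)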

    Read the article

  • Stuck with A* implementation

    - by Syed
    I have implemented some A* code in C# based on this JavaScript code. My C# implementation is the same as the JavaScript code above, but I'm unable to get it to work properly: e.g. the pathfinder blocks itself when the same number of walls are placed in front of it, and it fails in some other scenarios as well, like when one route is blocked completely. I am assuming the code is standalone (not relying on functionality in other scripts). Can anyone tell me if the above code is missing any A* functionality?
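    For comparison, here is a minimal, self-contained A* sketch in Python (not the poster's C#). It shows the pieces an implementation must get right: an open set ordered by f = g + h, a closed set so each node is expanded only once, re-queuing neighbors found via a cheaper path, and path reconstruction through parent links. A missing closed-set check or a broken neighbor update are common causes of the "blocks itself" behavior described.

        import heapq
        import itertools

        def astar(grid, start, goal):
            """Minimal A* on a 4-connected grid; grid[y][x] truthy == wall."""
            def h(p):  # Manhattan distance: admissible here, so the path is optimal
                return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

            tie = itertools.count()          # tie-breaker so the heap never compares nodes
            open_heap = [(h(start), 0, next(tie), start, None)]
            g_best = {start: 0}
            parent_of = {}                   # doubles as the closed set
            while open_heap:
                f, g, _, node, parent = heapq.heappop(open_heap)
                if node in parent_of:
                    continue                 # already expanded via a cheaper path
                parent_of[node] = parent
                if node == goal:
                    path = []
                    while node is not None:  # walk parent links back to start
                        path.append(node)
                        node = parent_of[node]
                    return path[::-1]
                x, y = node
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    nx, ny = nxt
                    if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                        continue             # off the grid
                    if grid[ny][nx]:
                        continue             # wall
                    ng = g + 1
                    if ng < g_best.get(nxt, float("inf")):
                        g_best[nxt] = ng
                        heapq.heappush(open_heap, (ng + h(nxt), ng, next(tie), nxt, node))
            return None                      # goal unreachable (walled off)

    If the C# port expands a node more than once, or never re-queues a neighbor when a cheaper route to it is found, it can wall itself in exactly as described.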

    Read the article

  • Integrating Google Apps with Salesforce using Google Apps Script

    A very special Google Developers Live episode, in which Arun Nagarajan talks about using Google Apps Script with Salesforce to show how easily developers can integrate Salesforce with Google Sheets, Gmail, Google Docs and other Google Apps products. Download the full source code of these demo scripts here: github.com From: GoogleDevelopers | Views: 52 | 6 ratings | Time: 41:23

    Read the article

  • Automated build platform for .NET portfolio - best choice?

    - by jkohlhepp
    I am involved with maintaining a fairly large portfolio of .NET applications. Also in the portfolio are legacy applications built on top of other platforms - native C++, ECLIPS Forms, etc. I have a complex build framework on top of NAnt right now that manages the builds for all of these applications. The build framework uses NAnt to do a number of different things:

    - Pull code out of Subversion, as well as create tags in Subversion
    - Build the code, using MSBuild for .NET or other compilers for other platforms
    - Peek inside AssemblyInfo files to increment version numbers
    - Delete certain files that shouldn't be included in builds / releases
    - Release code to deployment folders
    - Zip code up for backup purposes
    - Deploy Windows services; start and stop them
    - Etc.

    Most of those things can be done with NAnt by itself, but we did build a couple of extension tasks for NAnt to do some things that were specific to our environment. Also, most of those processes are genericized and reused across a lot of our different application build scripts, so that we don't repeat logic. So it is not simple NAnt code, and not simple build scripts. There are dozens of NAnt files that come together to execute a build.

    Lately I've been dissatisfied with NAnt for a couple of reasons: (1) its syntax is just awful - programming languages on top of XML are really horrific to maintain; (2) the project seems to have died on the vine; there haven't been a ton of updates lately and it seems like no one is really at the helm. Trying to get it working with .NET 4 has caused some pain points due to this lack of activity.

    So, with all of that background out of the way, here's my question. Given some of the things that I want to accomplish based on the list above, and given that I am primarily in a .NET shop but also need to build non-.NET projects, is there an alternative to NAnt that I should consider switching to? Things on my radar include PowerShell (with or without psake), MSBuild by itself, and rake. These all have pros and cons. For example, is MSBuild powerful enough? I remember using it years ago and it didn't seem to have as much power as NAnt. Do I really want to have my team learn Ruby just to do builds using rake? Is psake really mature enough of a project to pin my portfolio to? Is PowerShell "too close to the metal," meaning I'll end up having to write my own build library akin to psake to use it on its own? Are there other tools that I should consider?

    If you were involved with maintaining a .NET portfolio of significant complexity, what build tool would you be looking at? What does your team currently use?
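    To make the "real programming language instead of XML" option concrete (the property psake and rake share), here is a hedged Python sketch of three of the steps from the list above written as plain functions. The repository URL, paths, and AssemblyInfo location are illustrative assumptions, and both svn and msbuild are assumed to be on the PATH.

        import re
        import subprocess
        from pathlib import Path

        def checkout(url: str, dest: str) -> None:
            """Pull code out of Subversion (tagging would use 'svn copy')."""
            subprocess.run(["svn", "checkout", url, dest], check=True)

        def bump_version(assembly_info: Path) -> None:
            """Increment the last digit of AssemblyVersion("a.b.c.d") in place."""
            def inc(m: re.Match) -> str:
                a, b, c, d = m.group(1).split(".")
                return f'AssemblyVersion("{a}.{b}.{c}.{int(d) + 1}")'
            text = assembly_info.read_text()
            assembly_info.write_text(
                re.sub(r'AssemblyVersion\("(\d+\.\d+\.\d+\.\d+)"\)', inc, text))

        def build(solution: str) -> None:
            """Compile with MSBuild; non-.NET projects would shell out similarly."""
            subprocess.run(["msbuild", solution, "/p:Configuration=Release"], check=True)

        if __name__ == "__main__":
            checkout("https://svn.example.com/app/trunk", "build/app")  # hypothetical URL
            bump_version(Path("build/app/Properties/AssemblyInfo.cs"))
            build("build/app/App.sln")

    The point is not the language but the shape: each step becomes a testable function instead of an XML target, which is equally achievable in psake or rake.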

    Read the article

  • S#arp Architecture 1.5.1 released

    - by AlecWhittington
    So far we have had some great success with the 1.5 release of S#arp Architecture, but there were a few issues that made it into the release that needed to be corrected. These issues were:

    - Unnecessary assemblies in the root /bin and SolutionItemsContainer folders
    - Nant folder removed from root /bin - this was causing issues with the build scripts that come with the project if the user did not have Nant installed and available via a path variable
    - VS 2010 template - the CrudScaffoldingForEnterpriseApp...(read more)

    Read the article

  • Getting Internet Explorer to Open Different Sets of Tabs Based on the Day of the Week

    - by Akemi Iwaya
    If you have to use Internet Explorer for work and need to open a different set of work-specific tabs every day, is there a quick and easy way to do it instead of opening each one individually? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader bobSmith1432 is looking for a quick and easy way to open different daily sets of tabs in Internet Explorer for his work: When I open Internet Explorer on different days of the week, I want different tabs to be opened automatically. I have to run different reports for work each day of the week and it takes a lot of time to open the 5-10 tabs I use to run the reports. It would be a lot faster if, when I open Internet Explorer, the tabs I needed would automatically load and be ready. Is there a way to open 5-10 different tabs in Internet Explorer depending on the day of the week? Example:

    - Monday – 6 Accounting Pages
    - Tuesday – 7 Billing Pages
    - Wednesday – 5 HR Pages
    - Thursday – 10 Schedule Pages
    - Friday – 8 Work Summary/Order Pages

    Is there an easier way for Bob to get all those tabs to load and be ready to go each day instead of opening them individually every time?

    The Answer

    SuperUser contributor Julian Knight has a simple, non-script solution for us: Rather than trying the brute force method, how about a work-around? Open up each set of tabs, either in different windows or one set at a time, and save all tabs to bookmark folders. Put the folders on the bookmark toolbar for ease of access. Each day, right-click on the appropriate folder and click on ‘Open in tab group’ to open all the tabs. You could put all the day folders into a top-level folder to save space if you want, but at the expense of an extra click to get to them. If you really must go further, you need to write a program or script to drive Internet Explorer. The easiest way is probably writing a PowerShell script.

    Special Note: There are various scripts shared on the discussion page as well, so the solution shown above is just one possibility out of many. If you love the idea of using scripts for a function like this, then make sure to browse on over to the discussion page to see the various ones SuperUser members have shared!

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
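    For the scripted route, it doesn't have to be PowerShell. A standard-library Python sketch of the same weekday idea looks like this — note it opens the system default browser rather than driving IE specifically, and the URLs are placeholders:

        import datetime
        import webbrowser

        # Monday=0 ... Friday=4; URLs are placeholders for the real report pages.
        TABS_BY_DAY = {
            0: ["https://intranet.example.com/accounting/%d" % i for i in range(1, 7)],
            1: ["https://intranet.example.com/billing/%d" % i for i in range(1, 8)],
            2: ["https://intranet.example.com/hr/%d" % i for i in range(1, 6)],
            3: ["https://intranet.example.com/schedule/%d" % i for i in range(1, 11)],
            4: ["https://intranet.example.com/orders/%d" % i for i in range(1, 9)],
        }

        for url in TABS_BY_DAY.get(datetime.date.today().weekday(), []):
            webbrowser.open_new_tab(url)  # weekends fall through to an empty list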

    Read the article

  • Achieving Zero Downtime Deployment

    - by MattW
    I am trying to achieve zero-downtime deployments so I can deploy less during off hours and more during "slower" hours - or anytime, in theory. My current setup, somewhat simplified:

    - Web Server A (.NET App)
    - Web Server B (.NET App)
    - Database Server (SQL Server)

    My current deployment process:

    1. "Stop" the sites on both Web Server A and B
    2. Upgrade the database schema for the version of the app being deployed
    3. Update Web Server A
    4. Update Web Server B
    5. Bring everything back online

    Current Problem: This leads to a small amount of downtime each month - about 30 minutes. I do this during off hours, so it isn't a huge problem - but it is something I'd like to get away from. Also, there is no way to really go 'back'. I don't generally make rollback DB scripts - only upgrade scripts.

    Leveraging The Load Balancer: I'd love to be able to upgrade one web server at a time. Take Web Server A out of the load balancer, upgrade it, put it back online, then repeat for Web Server B. The problem is the database. Each version of my software will need to execute against a different version of the database - so I am sort of "stuck".

    Possible Solution: A current solution I am considering is adopting the following rules:

    - Never delete a database table.
    - Never delete a database column.
    - Never rename a database column.
    - Never reorder a column.
    - Every stored procedure must be versioned. Meaning - 'spFindAllThings' will become 'spFindAllThings_2' when it is edited. Then it becomes 'spFindAllThings_3' when edited again. The same rule applies to views.

    While this seems a bit extreme, I think it solves the problem. Each version of the application will be hitting the DB in a non-breaking way. The code expects certain results from the views/stored procedures - and this keeps that 'contract' valid. The problem is - it just seems sloppy. I know I can clean up old stored procedures after the app has been deployed for a while, but it just feels dirty. Also, it depends on all of the developers following these rules, which will mostly happen, but I imagine someone will make a mistake.

    Finally - My Question: Is this sloppy or hacky? Is anybody else doing it this way? How are other people solving this problem?
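    To make the versioning rule concrete: a migration under this scheme is purely additive — a new copy of the procedure is created and nothing is dropped or renamed, so servers still running the old application version keep working mid-rollout. A hedged sketch (the procedure body, column names, and connection string are hypothetical):

        import pyodbc  # assumes an ODBC driver for SQL Server is installed

        # Additive-only migration: spFindAllThings (v1) is left untouched while
        # the new application version is pointed at spFindAllThings_2.
        MIGRATION = """
        CREATE PROCEDURE dbo.spFindAllThings_2
        AS
        BEGIN
            SELECT ThingId, Name, ModifiedDate   -- hypothetical columns
            FROM dbo.Things;
        END
        """

        def apply_migration(conn_str: str) -> None:
            """Run one additive migration step; safe during a rolling deploy."""
            with pyodbc.connect(conn_str, autocommit=True) as conn:
                conn.execute(MIGRATION)

    Cleaning up the superseded _1 procedures then becomes a separate, deferred migration once no live server references them.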

    Read the article

  • Database source control

    - by Bojan Skrchevski
    Should database files (scripts, etc.) be in source control? If so, what is the best method to keep them there and update them? Is there even a need for database files to be in source control, since we can put them on a development server where everyone can use them and make changes if needed? But then we can't get them back if someone messes them up. What approach is best for keeping databases in source control?

    Read the article

  • A few questions about integrating AudioKinetic Wwise and Unity

    - by SaldaVonSchwartz
    I'm new to Wwise and to using it with Unity, and though I have gotten the integration to work, I'm still dealing with some loose ends and have a few questions. (I'm on Unity 4.3 as of now, but I think it shouldn't make any difference.)

    1. The base path: The Wwise documentation implies you set this in the AkGlobalSoundEngineInitializer basePath public ivar, which is exposed to the editor. However, I found that this variable is not really used. Instead, the path is hardcoded to /Audio/GeneratedSoundBanks in AkBankPath. I had to modify both scripts to actually look in the path that I set in the editor property. What's the deal with this? Just sloppiness, or am I missing something?

    2. Also about paths: since I'm on Mac, I'm using Unity natively under OS X and, in tandem, the Wwise authoring tool via VMWare, and I share the OS X Unity project folder so I can generate the soundbanks into the assets folder. However, the authoring tool (I downloaded the latest one for Windows) doesn't automatically generate any "platform-specific" subfolders for my Wwise files. That is, again, the Unity integration scripts assume the path to be /Audio/GeneratedSoundBanks/<my-platform>/, which in my case would be Mac (I set the authoring tool to generate for Mac). The documentation says Wwise will automatically generate the platform-specific folders, but it just dumps all the stuff in GeneratedSoundBanks. Am I missing some setting? Because right now I just manually create the /Mac folder.

    3. The C# methods AkSoundEngine.PostEvent and AkSoundEngine.LoadBank, for instance, have a few overloads, including ones where I can refer to the soundbanks or events by their ID. However, if I try to use these, for instance: AkSoundEngine.LoadBank(, AkSoundEngine.AK_DEFAULT_POOL_ID), where the int I got from the .h header, I get Ak_Fail. If I use the overloads that reference the objects by string name, then it works. What gives?

    4. Converting the ID header to C#: The integration comes with a C# script that seems to fork a process to call Python, which in turn converts the C++ header into a C# script. This always fails unless I manually execute the Python script myself from outside Unity. Might be a permissions thing, but has anyone experienced this?

    5. The Profiler: I set up the Unity player to run in the background and am using the "Profile" version of the plugin. However, when I start the Unity OS X standalone app, the profiler in VMWare does not see it. This, I'm thinking, might just be that I'm trying to see a running instance of the sound engine inside an OS X binary from a Windows virtual machine. But I'm just wondering if anyone has gotten the Windows profiler to see an OS X Unity binary.

    6. Different versions of the integration plugin: It's not clear to me from the documentation whether I have to manually (or write a script to) remove the "Profile" version and install the "Release" version when I'm going to do a Release build, or if I should install both versions in Unity and it'll select the right one.

    Thanks!

    Read the article

  • UPK for Testing Webinar Recording Now Available!

    - by Karen Rihs
    For anyone who missed last week’s event, a recording of the UPK for Testing webinar is now available.  As an implementation and enablement tool, Oracle’s User Productivity Kit (UPK) provides value throughout the software lifecycle.  Application testing is one area where customers like Northern Illinois University (NIU) are finding huge value in UPK and are using it to validate their systems.  Hear Beth Renstrom, UPK Product Manager, and Bettylynne Gregg, NIU ERP Coordinator, discuss how the Test It Mode, Test Scripts, and Test Cases of UPK can be used to facilitate applications testing.

    Read the article

  • Apache log analyzer that reports time spent serving requests

    - by antispam
    I need to monitor performance on my web server (there's an application server in the back) and create reports for senior management. I've enabled %T/%D in my Apache logs, and I would like to know if there's an Apache log analyzer or some other tool which parses these values and presents them as charts or reports. I am looking mostly for an integrated solution, not something along the lines of awk+gnuplot scripts.
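    In case nothing integrated fits, parsing %D (request duration in microseconds) yourself is straightforward. A minimal sketch, assuming %D was appended as the last field of each log line:

        import statistics
        import sys

        durations_ms = []
        with open(sys.argv[1]) as log:
            for line in log:
                try:
                    # %D is assumed to be the last whitespace-separated field
                    durations_ms.append(int(line.rsplit(None, 1)[-1]) / 1000.0)
                except (ValueError, IndexError):
                    continue  # line without a trailing %D field

        if not durations_ms:
            raise SystemExit("no %D values found - check the LogFormat")

        durations_ms.sort()
        p95 = durations_ms[int(len(durations_ms) * 0.95)]
        print(f"requests={len(durations_ms)} "
              f"mean={statistics.mean(durations_ms):.1f}ms p95={p95:.1f}ms")

    The printed numbers feed a spreadsheet or charting tool for the management reports; this only summarizes, it does not chart.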

    Read the article

  • Is there a procedural graphical programming environment?

    - by Marc
    I am searching for a graphical programming environment for procedural programming in which you can integrate some or all of the common sources of calculation procedures, such as Excel sheets, MATLAB scripts, or even .NET assemblies. I am thinking of something like a flowchart configurator in which you define the procedures via drag & drop using flow statements (if-else, loops, etc.). Do you know of any systems heading in this direction?

    Read the article

  • Missing wallpaper after running Unity-related sudo commands

    - by Vray
    After I pasted these scripts into the terminal, my wallpaper became white:

    sudo ln -s /etc/xdg/xdg-une/autostart/maximus-autostart.desktop /etc/xdg/autostart/
    sudo ln -s /etc/xdg/xdg-une/autostart/netbook-launcher.desktop /etc/xdg/autostart/
    sudo ln -s /usr/share/gconf/une/default/20_une-gconf-default /usr/share/gconf/defaults/
    sudo ln -s /usr/share/gconf/une/mandatory/20_une-gconf-mandatory /usr/share/gconf/defaults/
    sudo update-gconf-defaults

    Help me: I don't know what the -s option is for, and how do I get my wallpaper back? Best regards from a newbie.

    Read the article

  • Presenting at SQLConnections!

    - by andyleonard
    Introduction This year I'm honored to present at SQLConnections in Orlando 27-30 Mar 2011! My topics are Database Design for Developers, Build Your First SSIS Package, and Introduction to Incremental Loads. Database Design for Developers This interactive session is for software developers tasked with database development. Attend and learn about patterns and anti-patterns of database development, one method for building re-executable Transact-SQL deployment scripts, a method for using SqlCmd to deploy...(read more)

    Read the article

  • Five things SSIS should drop

    - by jamiet
    There’s a current SQL Server meme going round entitled Five things SQL Server should drop and, whilst no-one tagged me to write anything, I couldn’t resist doing the same for SQL Server Integration Services. So, without further ado, here are five things that I think should be dropped from SSIS.

    Data source connections. Seriously, does anyone use these? I know why they’re there. Someone sat in a meeting back in the early part of the last decade and said “Ooo, Reporting Services and Analysis Services have these things called Data Sources. If we used them in Integration Services then we’d have a really cool integration story.” Errr….no.

    Web Service Task. Ditto. If you want to do anything useful against anything but the simplest of SOAP web services, steer well clear of this peculiar SSIS addition.

    ActiveX Script Task. Another task that I suspect has never seen the light of day in a SSIS package. It was billed as a way of running upgraded DTS2000 ActiveX scripts in SSIS – sounds good except for one thing. Anytime one of those scripts tries to talk to the DTS object model (which they all do – otherwise what’s the point) it will error out. This one has always been a real head scratcher.

    Slowly Changing Dimension wizard. I suspect I may get some push back on this one but I’m mentioning it anyway. Some people like the SCD wizard; I am not one of those people! Everything that the SCD component does can easily be reproduced using other components, and from a performance point of view it's much more beneficial to use those alternatives.

    Multifile Connection Manager. Imagine buying a house that came with a set of keys that didn’t open any of the doors. Sounds ridiculous, right? How about a SSIS Connection Manager that doesn’t get used by any of the tasks or components? Ah, that’ll be the Multifile Connection Manager then!

    Comments are of course welcome. Diatribes are assumed :) @Jamiet

    Read the article

  • SQL SERVER – Script to Update a Specific Column in Entire Database

    - by Pinal Dave
    Last week, I received a very interesting question in email, and I really liked it, as I had to play around with SQL scripts for a while to come up with the answer the sender was looking for. Please read the question; I believe all of us face this kind of situation.

    “Pinal, in our database we have recently introduced a ModifiedDate column in all of the tables. From now on, whenever an update happens to a row, we update that field with the current date and time. Now here is the issue: when we added that field we did not give it a default value, because we were not sure when we would go live with the system, so we let it be NULL. The modification to the application went live yesterday and we are now updating this field. Here is where I need your help. We need to update all the tables in our database that have a ModifiedDate column. As our system has been live since yesterday, there are several thousand rows which are already updated with real-world values, so we do not want to touch those. Essentially, everywhere in our entire database where there is a ModifiedDate column and it is NULL, we want to update it with the current date and time. Do you have a script for it?”

    Honestly, I did not have such a script. This is a very specific requirement, but I was able to come up with two different methods to accomplish it.

    Method 1: Using INFORMATION_SCHEMA

    SELECT 'UPDATE ' + T.TABLE_SCHEMA + '.' + T.TABLE_NAME
        + ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;'
    FROM INFORMATION_SCHEMA.TABLES T
    INNER JOIN INFORMATION_SCHEMA.COLUMNS C
        ON T.TABLE_NAME = C.TABLE_NAME
        AND C.COLUMN_NAME = 'ModifiedDate'
    WHERE T.TABLE_TYPE = 'BASE TABLE'
    ORDER BY T.TABLE_SCHEMA, T.TABLE_NAME;

    Method 2: Using DMV

    SELECT 'UPDATE ' + SCHEMA_NAME(t.schema_id) + '.' + t.name
        + ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;'
    FROM sys.tables AS t
    INNER JOIN sys.columns c
        ON t.OBJECT_ID = c.OBJECT_ID
    WHERE c.name = 'ModifiedDate'
    ORDER BY SCHEMA_NAME(t.schema_id), t.name;

    The above scripts will generate the UPDATE statements that do the task which was asked. We can adapt the generated script to any other SELECT statement and retrieve any other data as well.

    Click to Download Scripts

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Configure Jenkins and Tomcat using Puppet on Vagrant

    - by ex3v
    I'm playing with setting up my first Spring + Jenkins + Tomcat CI dev environment. For now it's just a test/fun phase, but in the near future I'll be starting a new project with my coworkers. That's the reason I want the development environment virtualized and exactly the same on every development machine, as well as on the production server. I chose to use Vagrant and to try to write Puppet scripts that not only install everything, but also configure everything, so each of us will have the same Jenkins plugins, the same Jenkins and Tomcat logins and passwords, and literally, after calling vagrant up, we are ready to work. What I have managed to do so far is install the needed packages and set up port forwarding. My Vagrantfile looks like this (comments stripped):

    VAGRANTFILE_API_VERSION = "2"
    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
      config.vm.box = "precise32"
      config.vm.box_url = "http://files.vagrantup.com/precise32.box"
      config.vm.network :forwarded_port, guest: 80, host: 8090
      config.vm.network :forwarded_port, guest: 8080, host: 8091
      config.vm.network :private_network, ip: "192.168.33.10"
      config.vm.provision :puppet do |puppet|
        puppet.manifests_path = "puppet/"
        puppet.manifest_file = "default.pp"
        puppet.options = ['--verbose']
      end
    end

    And this is my Puppet file:

    Exec { path => [ "/bin/", "/sbin/", "/usr/bin/", "/usr/sbin/" ] }

    class system-update {
      exec { 'apt-get update':
        command => 'apt-get update',
      }
      $sysPackages = [ "build-essential" ]
      package { $sysPackages:
        ensure  => "installed",
        require => Exec['apt-get update'],
      }
    }

    class tomcat {
      package { "tomcat":
        ensure  => present,
        require => Class["system-update"],
      }
      service { "tomcat":
        ensure  => "running",
        require => Package["tomcat"],
      }
    }

    class jenkins {
      package { "jenkins":
        ensure  => present,
        require => Class["system-update"],
      }
      service { "jenkins":
        ensure  => "running",
        require => Package["jenkins"],
      }
    }

    include system-update
    include tomcat
    include jenkins

    Now, when I hit vagrant provision and go to http://localhost:8091/ I can see Jenkins running, so the above script works. The next step is configuring Jenkins and Tomcat by extending the Puppet scripts above. I'm pretty green when it comes to CI. After wandering around the web I've found a few tutorials about Jenkins configuration (here's one of them). I really want to move the configuration presented in this tutorial into the Puppet file, so when I spread my Vagrantfile and Puppet file between my coworkers, I will be sure that everyone has exactly the same setup. Unfortunately I'm also green about using Puppet, and I don't know how to do this. Any help will be appreciated.
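    One piece that usually comes next is pinning the same plugin set on every machine. Jenkins serves a CLI jar at <jenkins-url>/jnlpJars/jenkins-cli.jar whose install-plugin command can be scripted; a hedged Python sketch follows, with the plugin list and URL as placeholder assumptions (in a Puppet manifest this would typically be wrapped in exec resources or handled by a Jenkins module):

        import subprocess

        JENKINS_URL = "http://localhost:8080"      # forwarded to host port 8091 above
        PLUGINS = ["git", "maven-plugin"]          # hypothetical team-wide plugin set

        def cli(*args: str) -> None:
            # jenkins-cli.jar must first be downloaded from
            # JENKINS_URL + "/jnlpJars/jenkins-cli.jar"
            subprocess.run(
                ["java", "-jar", "jenkins-cli.jar", "-s", JENKINS_URL, *args],
                check=True)

        for plugin in PLUGINS:
            cli("install-plugin", plugin)
        cli("safe-restart")                        # plugins load after a restart

    Running this (or its Puppet equivalent) during provisioning is what makes "vagrant up and ready to work" hold for the plugin set, not just the packages.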

    Read the article

  • Speaking at SQL Saturday 61 in Washington DC

    - by AllenMWhite
    The organizers of SQL Saturday #61 in DC (actually Reston, VA) created an Advanced DBA/Dev track for their event, which I think is cool. Both of the presentations I'll be doing there on Saturday are in that track. (In fact, they're the first two sessions of the day.) The first, Automate Policy-Based Management using PowerShell will walk through the basics of Policy-Based Management, and then show you how to build PowerShell scripts to create and evaluate your policies. The second, Gather SQL Server...(read more)

    Read the article

  • Tired of Downloading Your Entire Schema? How About a Partial Download?

    - by user702295
    Proceed to document 1448266.1, Demantra Non Invasive Partial Schema Export Utility, Extracting Partial Schema Dumps for Analysis.  In this document you will find instructions and 2 SQL scripts.  Give it a try.  If you have a problem, feel free to post to this BLOG or to the community located at: https://communities.oracle.com/portal/server.pt/community/demantra_solutions/231 Regards! Jeff

    Read the article

  • Change ra0 to wlan0 in Ubuntu 10.10

    - by spambas
    I am trying to find a way to change the name of my Ralink interface to the usual wlan0. The application I use looks for this interface, but since we changed our wifi adapter from Intel to Ralink, we are missing wlan0 because the Ralink driver uses ra0 instead. On all the forums and blogs I read that you need to change that in /etc/sysconfig/network-scripts/ifcfg-ra0. Unfortunately, Ubuntu does not have an /etc/sysconfig folder.

    Read the article

  • My 24HOP session

    - by NeilHambly
    Hello! I was very excited to give my 24HOP session on SQL Server 2012 Memory. Unfortunately, the demo gods decided that I should be paid a few visits and caused me to drop my connections for 10-15 minutes (several reboots later). I have attached the PPT slides (PDF) for those who want them. I had to rush through the demos as I lost 10-15 minutes, so I will redo this session as a Camtasia recording to give it again in one contiguous recording. The demo scripts will also be made available...(read more)

    Read the article

  • Missing Index on CI_SEV_PROC table (CCB 2.3.1)

    - by ACShorten
    One of our customers reported performance issues with the severance processes in Oracle Utilities Customer Care and Billing V2.3.1, and after investigation a base index was found to be missing from the base package installation scripts. This index can be created manually to address any performance issues in this part of the system. For details of the index creation, refer to the KB Article on My Oracle Support: KB 1431632.1, Index Missing On CI_SEV_PROC Table.

    Read the article

  • Can anyone explain to me what problem Core Data solves?

    - by Curtis Sumpter
    Core Data seems to add a needless layer of complexity. If you want to save data created natively by the user in an app, why not just use an object and then write the data to SQLite, or back to a server using a RESTful script if necessary? Android doesn't have Core Data (or if it has something similar, I haven't seen it). What the heck is the point of buggy Core Data except useless, needless overhead for people who can't write SQL or CGI scripts?

    Read the article
