Search Results

Search found 30724 results on 1229 pages for 'backup solution'.

  • no-www redirect not working / DNS A record

    - by HonzaB
    I'm far from an expert on this, but I'll try to be as clear as possible. I have an eshop solution leased from a company, hosted on their server. I can access it through company.com/myshop, and it also lets me set up to three domains that it should recognize and redirect to my specific shop. I registered a domain with a different company and am trying to "redirect" it to the eshop by means of the following DNS entries (as they look in the admin GUI):

        * A 111.111.111.111
        *.myshop.com A 111.111.111.111
        myshop.com A 111.111.111.111

    With these I've managed to make www.myshop.com redirect to the IP of company.com (111.111.111.111), which then goes on to do exactly what I expect it to do (i.e. recognizes the request comes from my domain and does some further redirects itself). However, I can't seem to make myshop.com (i.e. without the www) redirect to that IP too. The company I registered the domain with provides a "URL redirect" service, but Google would only register the redirect request and wouldn't follow it. That's why I hope for a DNS solution to this: my assumption is that I've managed to miss adding a record to the DNS; if, however, the reason lies elsewhere, I'd be happy to hear about that too. If it's a search-engine-friendly solution (i.e. it avoids the www/no-www duplicate-content penalty), then that's even better; I have no preference either way (www/no-www), I just need it to work. Any help is greatly appreciated, thanks.
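    For reference, the same three records in BIND-style zone-file syntax would look roughly like the sketch below (the hostnames and IP are the placeholders from the question, and the trailing dots mark fully qualified names); the dig line is a quick way to check whether the apex record actually resolves:

        ; hypothetical zone fragment for myshop.com
        myshop.com.      IN  A  111.111.111.111   ; apex record, covers the no-www case
        www.myshop.com.  IN  A  111.111.111.111
        *.myshop.com.    IN  A  111.111.111.111   ; wildcard for any other subdomain

        ; verify from a shell once the zone has propagated:
        ;   dig +short myshop.com A     (should print 111.111.111.111)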


  • DON'T MISS: Live Webcast - Nimble SmartStack for Oracle with Cisco UCS (Nov 12)

    - by Zeynep Koch
    You are invited to the live webcast with Nimble Storage, Oracle and Cisco, where we will talk about the new SmartStack solution from Nimble Storage that features Oracle Linux, Oracle VM and Cisco UCS products. In this webinar, you will learn how Nimble Storage SmartStack with Oracle and Cisco provides a converged infrastructure for Oracle Database environments with Oracle Linux and Oracle VM. SmartStack, built on best-of-breed components, delivers the performance and reliability needed for deploying Oracle on a single symmetric multiprocessing (SMP) server or Oracle Real Application Clusters (RAC) on multiple nodes.

    When: Tuesday, November 12, 2013, 11:00 AM Pacific Time

    Panelists:
    Michele Resta, Director of Linux and Virtualization Alliances, Oracle
    John McAbel, Senior Product Manager, Cisco
    Ibby Rahmani, Solutions Marketing, Nimble Storage

    SmartStack™ solutions provide pre-validated reference architectures that speed deployments and minimize risk.
    The pre-validated converged infrastructure is based on an Oracle Validated Configuration that includes Oracle Database and Oracle Linux with the Unbreakable Enterprise Kernel.
    The solution components include a Nimble Storage CS-Series array, two Cisco UCS B200 M3 blade servers, Oracle Linux 6 Update 4 with the Unbreakable Enterprise Kernel, and Oracle Database 11g Release 2 or Oracle Database 12c Release 1.
    The Nimble Storage CS-Series is certified with Oracle VM 3.2, providing an even more flexible solution that leverages virtualization for functions such as test and development by delivering excellent random I/O performance in Oracle VM environments.

    Register today


  • Troubleshooting VC++ DLL in VB.Net

    - by Jolyon
    I'm trying to make a solution in Visual Studio that consists of a VC++ DLL and a VB.Net application. To figure this out, I created a VC++ Class Library project with the following code (I removed all the junk the wizard creates):

    mathfuncs.cpp:

        #include "MathFuncs.h"

        namespace MathFuncs
        {
            double MyMathFuncs::Add(double a, double b)
            {
                return a + b;
            }
        }

    mathfuncs.h:

        using namespace System;

        namespace MathFuncs
        {
            public ref class MyMathFuncs
            {
            public:
                static double Add(double a, double b);
            };
        }

    This compiles quite happily. I can then add a VC++ console project to the solution, add a reference to the original project for this new project, and call it as follows:

    test.cpp:

        using namespace System;

        int main(array<System::String ^> ^args)
        {
            double a = 7.4;
            int b = 99;
            Console::WriteLine("a + b = {0}", MathFuncs::MyMathFuncs::Add(a, b));
            return 0;
        }

    This works just fine, and will build to test.exe and mathsfuncs.dll. However, I want to use a VB.Net project to call the DLL. To do this, I add a VB.Net project to the solution, make it the startup project, and add a reference to the original project. Then I attempt to use it as follows:

        MsgBox(MathFuncs.MyMathFuncs.Add(1, 2))

    However, when I run this code, it gives me an error: "Could not load file or assembly 'MathFuncsAssembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format." Do I need to expose the method somehow? I'm using Visual Studio 2008 Professional.


  • Questioning pythonic type checking

    - by Pace
    I've seen countless times the following approach suggested for "taking in a collection of objects and doing X if X is a Y and ignoring the object otherwise":

        def quackAllDucks(ducks):
            for duck in ducks:
                try:
                    duck.quack("QUACK")
                except AttributeError:
                    # Not a duck, can't quack, don't worry about it
                    pass

    The alternative implementation below always gets flak for the performance hit caused by type checking:

        def quackAllDucks(ducks):
            for duck in ducks:
                if hasattr(duck, "quack"):
                    duck.quack("QUACK")

    However, it seems to me that in 99% of scenarios you would want to use the second solution, because of the following:

    1. If the user gets the parameters wrong, the objects will not be treated like ducks and there will be no indication. A lot of time will be wasted debugging why there is no quacking going on until the user finally realizes his silly mistake. The second solution would throw a stack trace as soon as the user tried to quack.
    2. If the user has any bugs in their quack() method which cause an AttributeError, those bugs will be silently swallowed. Once again, time will be wasted digging for the bug when the second solution would simply give a stack trace showing the immediate issue.

    In fact, it seems to me that the only time you would ever want to use the first method is when:

    1. The block of code in question is in an extremely performance-critical section of your application. Following the principle of "avoid premature optimization", you would only realize this, of course, after you had implemented the safer approach and found it to be a bottleneck.
    2. There are many types of quacking objects out there and you are only interested in quacking objects that quack with a very specific set of arguments (this seems to be a very rare case to me).

    Given this, why is it that so many people prefer the first approach over the second? What is it that I am missing? Also, I realize there are other solutions (such as using ABCs), but these are the two solutions I seem to see most often for the basic case.
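    For reference, a minimal sketch of the ABC route mentioned at the end of the question; the Quacker name and its structural subclass hook are hypothetical, not from the original post:

        # Minimal sketch: an ABC that recognizes any class defining a callable quack().
        from abc import ABC, abstractmethod

        class Quacker(ABC):
            @abstractmethod
            def quack(self, sound):
                ...

            @classmethod
            def __subclasshook__(cls, C):
                # Structural check: any class with a callable quack() counts as a Quacker.
                return callable(getattr(C, "quack", None))

        def quack_all_ducks(ducks):
            for duck in ducks:
                if isinstance(duck, Quacker):
                    duck.quack("QUACK")

    Note this still skips non-ducks silently, so it trades away the stack-trace advantage the question argues for; its main benefit is centralizing the "what counts as a duck" decision in one place.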


  • MySQL Policy-Based Auditing Webinar Recording Now Available

    - by Rob Young
    For those who missed the live event, the recording of the "How to Add Policy-Based Auditing to your MySQL Applications" webinar is now available. You can view it here. This presentation builds on my earlier blog post on MySQL Enterprise Audit that was announced at MySQL Connect in late September. The web presentation expands on the introductory blog and covers:

    The regulatory problem to be solved (internal audit, PCI, Sarbanes-Oxley, HIPAA, others)
    MySQL audit solutions for both Community and Enterprise users:
        General Log - use the basic features of the MySQL server
        MySQL 5.5 open audit API - or use your time and talent to build your own solution
        MySQL Enterprise Audit - or use the out-of-the-box, ready-for-production solution from MySQL
    A simple, step-by-step process for installing, enabling and configuring the MySQL Enterprise Audit plugin for use with existing apps
    New variables and options for tuning the MySQL Enterprise Audit plugin for your specific use case
    Best practices for securing and managing audit log files and archived images
    The roadmap for adding an integrated solution around MySQL Enterprise Audit for MySQL-only and Oracle/MySQL shops

    You can learn all the technical details on MySQL Enterprise Audit in the MySQL docs, and learn all about MySQL Enterprise Edition and Auditing here. As always, thanks for your support of MySQL!


  • How to improve relationships between consultants and staff programmers

    - by Catchops
    I have been a consultant for a small software consulting firm for quite some time now. Our normal business model is not staff augmentation; rather, we find clients who need assistance in building a solution of some kind, then send in a team who can build that solution, work with the existing IT staff, train all involved in supporting that solution, then move on to the next job. We are, of course, still around for any needed ongoing support. We have a great reputation in our area and have been very successful in implementing the solutions that we provide. However, I have noticed a common theme for most of our projects. When we get on-site, there is generally a "stressed" relationship between our team and many of the IT staff currently at the client. I understand completely that there may be some anxiety about our arrival and that defenses can come up when we are around. Many of the folks are understanding and easy to work with, but there are usually some who will not work well with us at all, and who can quickly become a project risk in many ways. We try to go in with open minds and good attitudes, and try NOT to be arrogant or condescending. We generally get deployed when there is a mess to clean up, but we understand that there were reasons for the decisions that got them into the bind they are in... so we just try to determine the next step forward and move on. My question is this: I'd like to hear from the IT staff and programmers out there who have had consultants in. What are the things that consultants do that fire up negative feelings and attitudes? What can we do better to make the relationship better, not only in the beginning, but as the project moves forward?


  • Data Integration 12c Raising the Big Data Roof at Oracle OpenWorld

    - by Tanu Sood
    Author: Dain Hansen, Director, Oracle

    It was an exciting OpenWorld 2013 for us in the Data Integration track. Our theme this year was all about "being future ready", previewing one of our biggest releases this year: Oracle Data Integration 12c. Just this week we followed up on this preview by announcing the general availability of the 12c release for Oracle's key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence.

    Mark Hurd's keynote on day one set the tone for the Data Integration sessions. Mark focused on big data analytics and changing consumer expectations. Real-time insight in particular is a key theme for Oracle overall and for the data integration products. In Mark Hurd's keynote we heard from key customers, such as Airbus and Thomson Reuters, how real-time analysis of operational data, including machine data, creates value and in some cases even saves lives. Thomas Kurian gave a deeper look into Oracle's big data and fast data solutions.

    In the lead Data Integration track session, Brad Adelberg, VP of Development, presented Oracle's Data Integration 12c product strategy, based on key trends from the initial OpenWorld keynotes. Brad talked about how Oracle's data integration products address the new data integration requirements that have evolved with cloud computing, big data, and changing consumer expectations, and how they set the key themes in our products' road map. Brad explained why and how fast time to value, high performance, and future-ready solutions are the top focus areas for product development. If you were not able to attend OpenWorld or this session, I recommend reading the white paper: Five New Data Integration Requirements and How to Meet them with Oracle Data Integration, which provides an in-depth look into how Oracle addresses the new trends in the DI market.

    Following Brad's session, Nick Wagner provided an in-depth review of Oracle GoldenGate's latest features and roadmap. Nick discussed how Oracle GoldenGate's tight integration with Oracle Database sets the product apart from the competition. We also heard that heterogeneity is still a major focus for GoldenGate's development, and there will be more news on that front with the next major release.

    After GoldenGate's product strategy session, Denis Gray from the PM team presented Oracle Data Integrator's product strategy session, talking about the latest and greatest on ODI. Another good session was delivered by long-time GoldenGate users Comcast. Jason Hurd and Amit Patel of Comcast talked about the various use cases in which they deploy Oracle GoldenGate throughout their enterprise, from database upgrades and feeding reporting systems to active-active database synchronization. The Comcast team shared many good tips on how to use GoldenGate for both zero-downtime upgrades and active-active replication with conflict-management requirements.

    One of our other important goals for the Data Integration track at OpenWorld this year was hearing from our customers. We ended day 1 on just that, with a wonderful ceremony for the Oracle Excellence Awards for Oracle Fusion Middleware Innovation, held in the Yerba Buena Center for the Arts. Congratulations to Royal Bank of Scotland and The Yalumba Wine Company, the winners in the Data Integration category, selected for their innovative use of Oracle's Data Integration products. You can find more information on the award and the winners in our previous blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation.

    Royal Bank of Scotland's Markets and International Banking division provides clients across the globe with seamless trading and competitive pricing, underpinned by a deep knowledge of risk management across the full spectrum of financial products. They handle millions of transactions daily to keep the lifeblood of their clients' businesses flowing, whether through payment management solutions or through bespoke trade finance solutions. Royal Bank of Scotland is leveraging Oracle GoldenGate and Oracle Data Integrator along with Oracle Business Intelligence Enterprise Edition and the Oracle Database for a variety of solutions. Mainly, Oracle GoldenGate and Oracle Data Integrator are used to feed their data warehouse, providing a real-time data integration solution that delivers transactional data to their analytics system in minutes, enabling improved decision making with timely, accurate data for their business users. Oracle Data Integrator's in-database transformation capabilities and its ability to integrate with Oracle GoldenGate for real-time data capture are the foundation of this implementation. This solution makes changes available in the analytics systems the same day they are deployed on the operational system, with 100% data quality guaranteed. Additionally, the solution has helped reduce their operational database size from 150GB to 10GB. Impressive! Now what if I told you this solution was built in 3 months and had a less-than-6-month return on investment? That's outstanding!

    The Yalumba Wine Company is situated in the Barossa Valley of Australia. It is the oldest family-owned winery in Australia, with a unique way of aging their wines in specially crafted 100-litre barrels. Did you know that "Yalumba" is Aboriginal for "all the land around"? The Yalumba Wine Company is growing rapidly, and needed to introduce a more modern standard to its existing manufacturing processes to meet globalization demands, overall time-to-market, and the operational efficiency objectives of product development. The Yalumba Wine Company worked with a partner, Bristlecone, to develop a unique solution whereby Oracle Data Integrator is leveraged to pull data from Salesforce.com and JD Edwards, in addition to their other pre-existing source systems, for consumption into their data warehouse. They have emphasized the overall ease of developing integration workflows with Oracle Data Integrator. The solution has brought better visibility for the business users, faster data loading and transformation for their data warehouse with rapid incorporation of new data sources, and a solid, future-proof foundation for their organization. Moving forward, they plan on leveraging more from Oracle's Data Integration portfolio. Terrific!

    In addition to these two customers, on Tuesday we featured many other important Oracle Data Integrator and Oracle GoldenGate customers. The Tuesday GoldenGate panel included Land O'Lakes, Smuckers, and Veolia Water. Besides giving us yummy nutrition and healthy water, these companies have another aspect in common: they all use GoldenGate to boost their ERP applications. Please read the recap by Irem Radzik. On Wednesday, the ODI panel included Barry Ralston and Ryan Weber of Infinity Insurance, Paul Stracke of Paychex Inc., and Ian Wall of Vertex Pharmaceuticals, for a session filled with interesting projects, use cases and approaches to leveraging Oracle Data Integrator. Please read the recap by Sandrine Riley for more.

    Thanks to everyone who joined us, and we hope to stay connected! To hear more about our Data Integration 12c products, join us in an upcoming webcast. Follow us at www.twitter.com/ORCLGoldenGate or go to our website at www.oracle.com/goto/dataintegration


  • Closer Aero-snap in Ubuntu

    - by Andrew Redd
    Before I get screamed at for duplicating a question: I've read "windows 7-like snap window maximize and vertical feature" and http://www.omgubuntu.co.uk/2009/11/get-aero-snap-in-ubuntu/. There are two problems with this solution that I am trying to get around:

    1. It's not sensitive to dragging a window.
    2. It's not intelligent about TwinView monitor setups.

    The first problem is the more pressing one. I have the Compiz settings with wmctrl, but this is not sensitive to dragging windows: if I have a window with focus and place my mouse in the panel, I get the window maximized even though I'm not dragging the window. A good solution would be sensitive to the state of the mouse (clicked, right-clicked, middle-clicked); an ideal solution would be sensitive to whether a window is being dragged or not.

    The second is a minor annoyance, to me at least. The commands as they are listed are equivalent to maximizing the windows, since I have a TwinView monitor setup. Is there any way to add these sensitivities to the commands?


  • Create Site Definition in SharePoint2010 Part1

    - by ybbest
    Creating a site definition in Visual Studio 2010 is very easy; it has a built-in Site Definition project template. Today I will show you how to create a site definition using Visual Studio 2010. You can download the complete source code here.

    Create the Site Definition project. Choose the site URL, and note that you can only choose a farm solution; the sandboxed solution option is greyed out. After Visual Studio does its magic, you can see the project structure as follows: it contains default.aspx, which is the default page for the site definition; onet.xml, which contains the actions for creating a site using this site template (this is like the elements.xml in a feature); and webtemp_Ybbest.CustomSiteDef.xml, which registers this site definition in SharePoint (this is similar to the feature.xml in a feature).

    Open webtemp_Ybbest.CustomSiteDef.xml and modify the ID field, as it has to be unique; you can optionally modify other fields as well. Deploy your solution, and you will find your site template when creating a site in either Central Administration or via Site Creation in the Site Actions menu. Create your site collection using your new site template, then navigate to the site; you will find the site was created using your site definition.
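    For reference, a webtemp_*.xml file generally looks something like the sketch below; the Name, ID and titles here are hypothetical placeholders rather than values from the original project:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- Hypothetical webtemp fragment; the ID must be unique across the farm. -->
        <Templates xmlns:ows="Microsoft SharePoint">
          <Template Name="Ybbest.CustomSiteDef" ID="10001">
            <Configuration ID="0"
                           Title="Ybbest Custom Site"
                           Description="Site created from a custom site definition"
                           DisplayCategory="Custom"
                           Hidden="FALSE" />
          </Template>
        </Templates>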


  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    So the problem I'm having is that when using Nick Gravelyn's tiledlib pipeline for reading and drawing tmx maps in XNA, the transparency color I set in Tiled's editor works in the editor, but when I draw the map, the color that's supposed to become transparent is still drawn. The closest things to a solution that I've found are:

    1. Change my sprite batch's BlendState to NonPremultiplied (found this in a buried tweet).
    2. Get the pixels that are supposed to be transparent at some point, then set them all to transparent.

    Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since it looks like the custom pipeline processor reads in the transparent color and sets it as the color key for transparency, according to the code; just something is going wrong somewhere. At least that's what it looks like the code is doing.

    TileSetContent.cs:

        if (imageNode.Attributes["trans"] != null)
        {
            string color = imageNode.Attributes["trans"].Value;
            string r = color.Substring(0, 2);
            string g = color.Substring(2, 2);
            string b = color.Substring(4, 2);
            this.ColorKey = new Color((byte)Convert.ToInt32(r, 16),
                                      (byte)Convert.ToInt32(g, 16),
                                      (byte)Convert.ToInt32(b, 16));
        }
        ...

    TiledHelpers.cs:

        // build the asset as an external reference
        OpaqueDataDictionary data = new OpaqueDataDictionary();
        data.Add("GenerateMipMaps", false);
        data.Add("ResizetoPowerOfTwo", false);
        data.Add("TextureFormat", TextureProcessorOutputFormat.Color);
        data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue);
        data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta);
        tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>(
            new ExternalReference<Texture2DContent>(path), null, data, null, asset);
        ...

    I can share more code as well if it helps to understand my problem. Thank you.
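    For what it's worth, trying solution 1 from the list above would look something like this with XNA 4.0's SpriteBatch (a sketch only, not a confirmed fix for the pipeline issue; spriteBatch is assumed to be the batch drawing the map):

        // Hypothetical: draw the map with non-premultiplied alpha blending.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied);
        // ... draw the tile layers here ...
        spriteBatch.End();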


  • Oracle Buys Compendium - Adds Leading Content Marketing Platform to Oracle Eloqua Marketing Cloud

    - by Richard Lefebvre
    News Facts

    Oracle today announced that it has acquired Compendium, a cloud-based content marketing provider that helps companies plan, produce and deliver engaging content across multiple channels throughout their customers' lifecycle. Compendium's data-driven approach aligns relevant content with customer data and profiles to help companies more effectively attract prospects, engage buyers, accelerate conversion of prospects to opportunities, increase adoption, and drive revenue growth. Compendium's innovative solution complements Oracle's industry-leading Eloqua Marketing Cloud, which is a part of Oracle's comprehensive Customer Experience solution. The combination of Oracle Eloqua Marketing Cloud with Compendium is expected to enable modern marketers to align persona-based content to customers' digital body language to increase "top-of-funnel" customer engagement, improve the quality of sales leads, realize the highest return on their marketing investment, and increase customer loyalty. More information on this announcement can be found at http://www.oracle.com/compendium.

    Supporting Quotes

    "As customers increasingly access information through online and mobile channels, the buying process is shifting from sales-driven to marketing-driven. Now, more than ever, marketers are challenged to deliver relevant and engaging content across multiple channels and throughout the customer lifecycle," said Thomas Kurian, Executive Vice President, Oracle Development. "By adding Compendium's content marketing platform to Oracle Eloqua Marketing Cloud, customers will be able to capture more prospects, improve the customer experience and drive top line revenue."

    "Oracle Eloqua Marketing Cloud is uniquely positioned to capture a prospect's digital body language to help companies know each buyer's demographics, behaviors and influencers," said Chris Baggott, Compendium CEO. "By combining this buyer profile with Compendium's data-driven content marketing platform, marketers will be able to deliver the right content, to the right individual, across the right channel, at the right time. We are very excited to now be a part of the industry's most complete marketing cloud solution, giving us a global stage to deliver innovative content marketing solutions."

    Supporting Resources

    About Oracle and Compendium
    General Presentation
    Customer and Partner Letter
    FAQ


  • How to move complete SharePoint Server 2007 from one box to another

    - by DipeshBhanani
    It was the time of my first onsite client assignment on SharePoint. The client had a one-server production environment. They wanted to upgrade the topology to a completely new SharePoint farm of three servers. So the task was to move the whole MOSS 2007 installation to the new server environment without impacting data. The last three scary words, "... without impacting data ...", were actually putting pressure on my head. Moreover, the SSP needed to move as well, because additional information had been added for users apart from the AD import.

    I thought I had to do only a backup and restore. It appeared pretty easy at first thought. Just because of those damn scary words, I decided to check the internet for guidance related to this scenario. I couldn't find anything except the general guidance for moving servers on the Microsoft TechNet site. I promised myself I would start blogging with this post if I was successful in this task. Well, it took me a long time to write this, but I finally made it. I hope it will be useful to everyone looking at a SharePoint server move.

    Before beginning the restoration, make sure that there is no difference between the SharePoint versions on the source and destination servers. Also check whether the state of the SharePoint installation at the time of backup and restore is the same (e.g. SharePoint-related service packs and patches, if any).

    The main tasks of the server move are as follows:

    1. Backup all the databases.
    2. Install and configure SharePoint in the new environment.
    3. Deploy all solutions (WSP files) globally on the destination server, to install the features attached to the solutions.
    4. Install all the custom features.
    5. Deploy/copy custom pages/files which were added to the "12 hive" folder.
    6. Restore the SSP.
    7. Restore My Site.
    8. Restore the other web applications.

    Tasks 3 to 5 are there to make sure that we have configured the environment well enough for the web applications to be restored successfully. The main and most complex task was restoring the SSP. I started restoring the SSP through Central Administration. After a while, the restoration status was updated to "unsuccessful". "Damn it, what went wrong?" I thought, looking at the error detail down the page. I can't remember the error message, but I corrected it and restored again.

    Actually, once you fail restoring the SSP, your restoration will fail again and again unless you clean up all the related stuff properly. I wanted to find the actual reason, so I cleaned, restored, cleaned, restored... I tried almost 5-6 times and finally succeeded. I realized how pleasant it is to see the word "Successful" on the screen. Without wasting more of your reading time, let me write out the detailed steps for restoring the SSP:

    1. Delete the SSP with the following STSADM command:
       stsadm -o deletessp -title <SSP name> -deletedatabases -force
       e.g.: stsadm -o deletessp -title SharedServices1 -deletedatabases -force
    2. Check for and delete the web application associated with the SSP if it exists.
    3. Check for and remove the "Alternate Access Mapping" associated with the SSP if it exists.
    4. Check for and delete the IIS site as well as the application pool associated with the SSP if they exist.
    5. Stop the following services:
       - Office SharePoint Server Search
       - Windows SharePoint Services Search
       - Windows SharePoint Services Help Search
    6. Delete all the databases associated with or related to the SSP from SQL Server.
    7. Reset IIS.
    8. Start the following services again:
       - Office SharePoint Server Search
       - Windows SharePoint Services Search
       - Windows SharePoint Services Help Search
    9. Restore the new SSP.
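    As a side note on task 1 above, a catastrophic farm backup and restore with STSADM would look roughly like the sketch below; the UNC path is a hypothetical placeholder:

        rem Hypothetical backup on the source farm:
        stsadm -o backup -directory \\backupserver\spbackup -backupmethod full

        rem Hypothetical restore on the destination farm:
        stsadm -o restore -directory \\backupserver\spbackup -restoremethod overwrite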
    After the SSP restoration, everything else completed very smoothly without any more issues. I made a few modifications to the sites for the change of server name, and finally the new environment was ready.


  • Copy New Files Only in .NET

    - by psheriff
    Recently I had a client that needed to copy files from one folder to another. However, there was a process running that would dump new files into the original folder every minute or so. So we needed to be able to copy over all the files one time, then also be able to go back a little later and grab just the new files. After looking into the System.IO namespace, none of the classes within it met my needs exactly. Of course I could build it out of the various File and Directory classes, but then I remembered back to my old DOS days (yes, I am that old!). The XCopy command in DOS (or the command prompt, for you pure Windows people) is very powerful. One of the options you can pass to this command is to grab only newer files when copying from one folder to another. So instead of writing a ton of code I decided to simply call the XCopy command using the Process class in .NET. The command I needed to run at the command prompt looked like this:

        XCopy C:\Original\*.* D:\Backup\*.* /q /d /y

    What this command does is copy all files from the Original folder on the C drive to the Backup folder on the D drive. The /q option says to do it quietly, without listing all the file names as it copies them. The /d option says to get any newer files it finds in the Original folder that are not in the Backup folder, or any files that have a newer date/time stamp. The /y option will automatically overwrite any existing files without prompting the user to press the "Y" key to overwrite the file.

    To translate this into code that we can call from our .NET programs, you can write the CopyFiles method presented below.

    C#

        using System.Diagnostics;

        public void CopyFiles(string source, string destination)
        {
            ProcessStartInfo si = new ProcessStartInfo();
            string args = @"{0}\*.* {1}\*.* /q /d /y";

            args = string.Format(args, source, destination);

            si.FileName = "xcopy";
            si.Arguments = args;
            Process.Start(si);
        }

    VB.NET

        Imports System.Diagnostics

        Public Sub CopyFiles(source As String, destination As String)
            Dim si As New ProcessStartInfo()
            Dim args As String = "{0}\*.* {1}\*.* /q /d /y"

            args = String.Format(args, source, destination)

            si.FileName = "xcopy"
            si.Arguments = args
            Process.Start(si)
        End Sub

    The CopyFiles method first creates a ProcessStartInfo object. This object is where you fill in the name of the command you wish to run and also the arguments that you wish to pass to the command. I created a string with the arguments, then filled in the source and destination folders using the string.Format() method. Finally you call the Start method of the Process class, passing in the ProcessStartInfo object. That's all there is to calling any command in the operating system. Very simple, and much less code than it would have taken had I coded it using the various File and Directory classes.

    Good luck with your coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
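    One caveat worth noting: Process.Start returns as soon as xcopy is launched, not when the copy finishes. If the caller needs to block until the copy completes, a small variant of the C# call (reusing the si object from above) might look like this:

        // Hypothetical variant: wait for xcopy to finish and check its exit code.
        using (Process p = Process.Start(si))
        {
            p.WaitForExit();
            bool copied = (p.ExitCode == 0);  // xcopy reports 0 when files copied without errors
        }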


  • Oracle is Proud Sponsor of Gartner Security and Risk Management Summit 2011

    - by Troy Kitch
    Oracle will have a very strong presence at this year's Gartner Security and Risk Management Summit 2011 in Washington D.C., June 20-23. If you plan on being there, please be sure to stop by Oracle booth D and say "hi" to the Security Solution Experts. Please join us for the:

    Oracle Solution Provider Session
    Oracle Solution Showcase Receptions
    Oracle Face to Face Meetings

    We have some powerful database security demonstrations that we're showing off. If you haven't had an opportunity to check out the new Oracle Database Firewall, now's your chance to learn why it's the first line of defense in a database security defense-in-depth strategy. Additionally, Mark Morrison, director of intelligence community information assurance, and Pat Sack, VP of the Oracle national security group, will discuss U.S. government cross-domain secure information sharing. This case study session will explain how Oracle helped the U.S. government consolidate its mission-critical intelligence database infrastructure securely, and the underlying Oracle Database security solutions that can benefit any organization looking to increase business agility and drive down IT costs through database consolidation. Potomac Ballroom B. Find out more about the event here. Twitter #GartnerSecurity to join the conversation.


  • Page_BlockSubmit - reset it to False, if there is a scenario when page doesn't postback on validation error

    - by Vipin
    Recently, I was facing a problem where, if there was a validation error and I changed the state of a checkbox, it wouldn't post back on the first attempt. But when I unchecked and checked it again, it posted back on the second attempt... this is some quirky behaviour in the ASP.NET platform. The solution was to reset the Page_BlockSubmit flag to false, and it works fine. The following explanation is from http://lionsden.co.il/codeden/?p=137&cpage=1#comment-143

    The submit button on the page is a member of vgMain, so automatically it will only run the validation on that group. A solution is needed that will run validation on multiple groups and block the postback if needed.

    Solution

    Include the following function on the page:

        function DoValidation()
        {
            // validate the primary group
            var validated = Page_ClientValidate('vgPrimary');

            // if it is valid
            if (validated)
            {
                // validate the main group
                validated = Page_ClientValidate('vgMain');
            }

            // remove the flag to block the submit if it was raised
            Page_BlockSubmit = false;

            // return the results
            return validated;
        }

    Call the above function from the submit button's OnClientClick event:

        <asp:Button runat="server" ID="btnSubmit" CausesValidation="true" ValidationGroup="vgMain" Text="Next" OnClick="btnSubmit_Click" OnClientClick="return DoValidation();" />

    What is Page_BlockSubmit?

    When the user clicks on a button causing a full postback, after running Page_ClientValidate ASP.NET runs another built-in function, ValidatorCommonOnSubmit. Within Page_ClientValidate, Page_BlockSubmit is set based on the validation. The postback is then blocked in ValidatorCommonOnSubmit if Page_BlockSubmit is true. No matter what, at the end of the function Page_BlockSubmit is always reset back to false. If a page does a partial postback without running any validation and Page_BlockSubmit has not been reset to false, the partial postback will be blocked. In essence, the above function, DoValidation, acts similarly to ValidatorCommonOnSubmit: it runs the validation and then returns false to block the postback if needed. Since the built-in postback is never run, we need to reset Page_BlockSubmit manually before returning the validation result.


  • Missed OpenWorld 2011 or JavaOne? See the Key Announcements Today

    - by Dain C. Hansen
    Learn more about the Oracle OpenWorld and JavaOne key announcements through our six on-demand webcasts or podcasts. Your time is precious, and you can't make time to watch all the keynotes and sessions on demand. Want a concise overview of the Oracle OpenWorld and JavaOne key announcements? Presented by Oracle experts in EMEA, these six webcasts will help you decide which keynotes, general sessions or solution sessions from Oracle OpenWorld and JavaOne could be of most interest to you. Six informative, on-demand sessions are available as podcasts and webcasts, on Oracle hardware and software, each taking just 15-20 minutes.

    Be updated in an hour on the Oracle OpenWorld announcements:

    Oracle Exadata and Exalogic Engineered Systems with Oracle Applications
    Oracle Exalytics Business Intelligence Machine, the industry's first in-memory hardware and software system
    Oracle Big Data Appliance, the end-to-end solution for Big Data
    Oracle Enterprise Manager 12c, the industry's first solution to combine management of the full Oracle stack with complete enterprise cloud lifecycle management
    Oracle Fusion Applications, a complete suite with 100+ modules
    Oracle Public Cloud, with subscription-based, self-service access to Oracle Fusion Applications, Oracle Fusion Middleware and Oracle Database

    Watch the six JavaOne key announcement webcasts anywhere you can access the internet and learn more about:

    Plans for advancing the Java Platform, Standard Edition (Java SE) and an update on Java SE 8
    Plans announced for the evolution of the Java Platform, Micro Edition
    Availability of JavaFX 2.0
    NetBeans IDE availability for Windows, Mac, Linux and Oracle Solaris
    The latest developments in the evolution of the Java Platform, Enterprise Edition (Java EE)
    Oracle Java Cloud Service

    Follow other informative, on-demand sessions on Oracle hardware and software on topics like Cloud, Exadata, Exalogic, Exalytics, Big Data Appliance, Enterprise Manager 12c, hardware (SuperCluster, servers, and storage), and Oracle Fusion Applications. Register now!


  • Which algorithms/data structures should I "recognize" and know by name?

    - by Earlz
    I'd like to consider myself a fairly experienced programmer. I've been programming for over 5 years now. My weak point, though, is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms/data structures that I should recognize and know by name?

    Note, I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them; I just want to be able to recognize when an algorithm/data structure would be a good solution to a problem. I'm asking more for a list of algorithms/data structures that I should "recognize". For instance, I know the solution to a problem like this:

    You manage a set of lockers labeled 0-999. People come to you to rent a locker and then come back to return the locker key. How would you build a piece of software to manage knowing which lockers are free and which are in use?

    The solution would be a queue or stack.

    What I'm looking for are things like "in what situation should a B-tree be used", "what search algorithm should be used here", etc. And maybe a quick introduction to how the more complex (but commonly used) data structures/algorithms work. I tried looking at Wikipedia's list of data structures and algorithms, but I think that's a bit overkill. So I'm looking more for the essential things I should recognize.


  • Oracle Systems and Solutions at CloudExpo NY 2012

    - by ferhat
    Oracle's Larry Ellison and Mark Hurd just unveiled the industry's broadest cloud strategy on June 6, with services based on industry standards and 100+ enterprise applications live in the cloud today! It is the broadest strategy to support your journey to the cloud, along any path you choose, at any pace your business requires. This is great assurance for your journey into the clouds and, at the same time, quite a temptation, don't you think?

    We will be at the Cloud Expo conference, taking place June 11-14 in New York. Oracle is a Platinum Plus sponsor of the 10th International Cloud Computing Conference & Expo 2012 East, and is also glad to offer complimentary VIP Gold Passes to the conference. We wish everyone a great and productive time with all the fellow cloudsters.

    We, the systems solutions group at Oracle, have prepared the Oracle Optimized Solution for Enterprise Cloud Infrastructure to help you start your Infrastructure-as-a-Service with ease, confidence, speed, and savings. In this solution we are now bringing together the power of Oracle Solaris and SPARC T4 servers. We will be at the Cloud Bootcamp on Wednesday, June 13th, discussing how this combination can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. We will also be on the expo floor at booth #511 throughout the Cloud Expo conference.

    Join us for the keynote, general session, and technical sessions with Oracle:

    Keynote Session: A Pragmatic Journey to the Cloud, Tuesday, June 12, 2012
    General Session: Oracle Cloud - An Enterprise Cloud for Business-Critical Applications, Monday, June 11, 2012
    Conference Session: Accelerate Enterprise Cloud Deployment and Gain Total Cloud Control, Monday, June 11, 2012
    Conference Session: The Java EE 7 Platform: Developing for the Cloud, Monday, June 11, 2012
    Conference Session: Integrating Big Data into Your Data Center: A Big Data Reference Architecture, Monday, June 11, 2012
    Conference Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder, Tuesday, June 12, 2012
    Conference Session: Building a Private, Public, or Hybrid Cloud? Simplify Your Cloud with Oracle's Complete Cloud Solution, Tuesday, June 12, 2012
    Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, Wednesday, June 13, 2012


  • Adopting Technologies for the Sake of Technologies

    - by shiju
    Unlike other engineering industries, the software engineering industry really lacks maturity. The lack of maturity can be seen in different aspects of the entire software development life cycle. I think other engineering industries are well organised and structured, with common, proven engineering practices. The software engineering industry is hugely diverse, with different operating systems and a variety of development platforms, programming languages, frameworks and tools. These days, people go chasing hype and intellectual fashions without understanding their core business problems, adopting technologies and practices for the sake of technologies and practices and simply becoming "poster children" of those technologies and practices. Understanding the core business problem and providing a solid solution with a platform-neutral approach will give you more business value and ROI than blindly adopting technologies and tailoring your applications to them. People have been migrating their solutions to new technologies and different versions of frameworks without any business need.

    The "Pepsi Challenge" in Software Development

    The Pepsi Challenge marketing campaign of the 1980s was a popular and very interesting promotion in which people tasted one cup of Pepsi and another cup of Coca-Cola. In the taste test, more than 50% of people preferred Pepsi over Coca-Cola. The secret behind Pepsi's success was its extra sweetness: they simply added more sugar, and more people preferred the sweeter flavour. But you can't identify the better cola after sipping a single cup, judging only by sweetness. The same thing has been happening in the software industry when choosing development frameworks and technologies. People choose frameworks based on the initial sugary feeling, without understanding their core strengths and weaknesses. The sugary framework may turn out to be harmful when you develop real-world systems. There is no silver bullet for solving every kind of problem, and frameworks and tools all have strengths and weaknesses, so it is better to understand both. And please keep in mind that you have to develop real apps to understand the real capabilities and weaknesses of a framework. Evaluating a technology based on a few blog posts will harm your projects, and those bloggers may lack real-world experience with the framework.

    The Problem with Aligning a Development Practice with Tools

    Recently I observed a discussion in a group where someone asked for suggestions for practicing Continuous Delivery (CD) as part of agile-based application engineering. The discussion quickly turned to choosing and using a Continuous Integration (CI) tool, and different people suggested different CI tools simply for practicing Continuous Delivery. If you have worked with core agile engineering practices, you know that the real essence of agile is neither choosing a tool nor choosing a process. Simply choosing a CI tool from a particular vendor will not ensure that you are delivering evolving software based on customer feedback. You have to understand the real essence of an engineering practice and choose the right tool for practicing it, instead of focusing on a particular tool.
    If you want to adopt a practice, you need a solid understanding of its real essence; tools merely help with better automation.

    Adopting New Technologies for the Sake of Technologies

    Another problem is that developers have a tendency to adopt new technologies and simply migrate their existing apps to them. It is okay if your existing system has problems with its technology stack or maintainability challenges with the existing solution, and you move to a new technology to solve those current problems. We adopt new technologies to solve new challenges, like scalability challenges when the application or user base is growing unpredictably. Please keep in mind that every new technology becomes old after you have worked with it for a few years. A Facebook status update by Janakiraman expresses the attitude of a typical customer. For example, Node.js has become one of the hottest buzzwords in the software industry, and many developers are trying to adopt it for their apps. The important thing is that Node.js is a minimalist framework that does some great things for some problems, but it's not a silver bullet. I have been working with Node.js too; it is good for some problems, but a really bad choice for every kind of problem. Adopting new technologies for new projects is good if we get real business value from them, because a newer framework may solve well-known existing problems and incorporate good solutions for the latest challenges. But adopting a new technology for the sake of new technology is a really bad idea. Another example: JavaScript is getting a lot of attention, so many developers are building heavily JavaScript-centric web apps. First, they adopt a client-side JavaScript MV* framework such as AngularJS, Ember or Backbone, and develop a Single Page App (SPA), repeating the mistakes we made in the past: the mistakes we made on the server side are being transplanted to the client side. The problem is that people are just adopting new technologies, not improving their solutions. I predict that many Single Page Apps will suck in the future. We need a hybrid approach that leverages both the server side and the client side for developing next-generation web apps. Yet another problem is liking a particular framework and using it for every kind of app. In the past, I knew some passionate Silverlight guys who tried to use that framework for every kind of app, including larger line-of-business apps. And these days, developers are migrating their existing Silverlight apps in favour of the HTML5 buzzword. So the real question is: what business value do we get from these apps when we develop them for the sake of a particular technology instead of a business need? A further problem is solution consultants providing unnecessary solutions for the sake of a particular technology or hype. For example, Big Data solutions are great for solving the problem of the three Vs: volume, velocity and variety. But trying to apply them to every application will create problems. Say there is a small web site running on a limited budget; declaring that it needs a recommendation engine built on a Hadoop-based solution with a 16-node cluster would be really horrible. If we really need a Hadoop-based solution, go for it, but applying it to every application would be a big disaster.
    It would be great if we could understand the core business problems first, and then choose the right framework for solving the actual business problem, instead of trying to provide so many solutions.

    The Problem with Being Tied to a Platform Vendor

    Some organizations and teams are tied to a particular platform vendor and don't want to use any product other than those from their preferred or existing vendor. They will accept any product provided by the vendor, regardless of its capability. This gives you some benefits regarding integration and collaboration between different products from the same vendor, but it costs you the opportunity to provide better solutions for your business problems. As a real-world sample scenario, a lot of companies use SAP for their ERP solutions. When they think about mobility, or about developing hybrid mobile apps, they can easily find a framework from SAP: SAP provides a framework for HTML5-based UI development named SAPUI5. If you adopt that framework based only on a preference for your existing platform vendor, you may lose other opportunities to provide better solutions. Initially you may enjoy the sugary feeling provided by the platform vendor, but you have to think about developing apps that are capable of solving future challenges. I am not saying that any particular framework is bad; I believe every framework is better than another at solving at least one problem. My point is that we should not be tied to any specific platform vendor unless the organization has resource-availability problems.

    Being Polyglot for Providing the Right Solutions

    The modern software engineering industry is greatly diverse, with different tools and platforms. Lots of open source frameworks and new programming languages are being released to the developer community, and choosing the right platform without a biased opinion is a really difficult task. But it would be really great if we could develop a platform-neutral mindset and become polyglot developers, providing better solutions based on the actual business problems. IMHO, we should learn a new programming language and a new framework every year. This will improve our capabilities as developers and also improve the quality of our primary programming-language skills. Being polyglot gives individual developers and organizational teams greater opportunities, both for their developer experience and for their applications. Organizations can analyse their business problems without being tied to any technology, and later provide solutions by choosing from different platforms and tools.

    Summary

    In this blog post, what I was trying to say is that we should not be tied to, or biased toward, any development platform, technology, vendor or programming language, and we should not adopt technologies and practices for their own sake. If we adopt a technology or a practice for its own sake, we simply become a "poster child" of that technology or practice. We should not become poster children of other people's intellectual thoughts and theories; instead, we should become solution developers and consultants who can provide better solutions for business problems. Being a polyglot developer is a good way to improve your developer skills, which lets you provide better solutions for business problems.
    The most important thing is that we should become platform-neutral developers whose passion is providing brilliant solutions. It would be great if we could provide minimalist, pragmatic business solutions. You can follow me on Twitter @shijucv


  • Managing Scripts in Oracle SQL Developer

    - by thatjeffsmith
    You back up your databases, right? You back up your home computer – your media collection, tax documents, bank accounts, etc. – right? You back up your handy-dandy SQL scripts, right? Ok, now that I've got your head nodding, I want to answer a question I get every so often: How can I manage my scripts in SQL Developer?

    This is an interesting question. First, it assumes that one SHOULD manage their scripts in their IDE. Now, what I think the question generally gets around to is, how can we:

    Navigate to our scripts
    Open them
    Execute them

    What a good IDE should have is an interface to your existing Version Control System (VCS). SQL Developer supports both Subversion and Git out of the box. You can also download an extension via check-for-updates to get support for CVS. Now, what I'm about to show you COULD be done without versioning and controlling your scripts, but I want to ask you why you wouldn't want to do this. So, I'm going to proceed and assume that you do INDEED version your scripts already.

    Seeing what scripts you've already got in your repository

    This is very straightforward – just open the Team Versions panel, then connect to your repository. It shows you the files in your source control system. Now, I could 'preview' said file right away. If I open the file from here, we get a temp file copy down from the server to the local machine. This is a local temp copy of the controlled script – I can read/execute, but not write to it. And that might be all you need. But if your script calls other scripts, then you're going to want to check out the server copy of your stuff down to your local SVN working-copy directory. That way, when your script calls another script, you're executing the PRODUCTION-APPROVED copies of said scripts. And if you do SPOOL or other file I/O stuff, it will work as expected. To get to those said client copies of your scripts…

    Enter the Files Panel

    The Files panel is accessible from the View menu. You can get to your files one of two ways. If you've touched the file recently, you can see it under the Recent tree. Otherwise, you can navigate to your local 'checked out' copies of your script(s). Open your local copies, see what's changed, etc. And I can access the change history and see what's been touched… What changes am I going to 'push out' if I commit this back to the server? Most of us work on teams, yes? This panel also gives me a heads-up if someone else is making changes to the same file. I can see the 'incoming' changes as well.

    To Sum It Up…

    If I want to get a script to run:

    do a full get to your local directory
    open the script(s)

    The Files panel will tell you if your local copy is out of date from the server, and if you have made local changes you've forgotten to commit back up to the server for your fellow teammates. Now, if you're the selfish type and don't want to share, that's fine. But you should still be backing up your scripts, and you can still use the Files panel to manage them.

    Read the article

  • Outlook 2007: Automatically mark a mail when printed

    - by Mestika
    Hi, I am looking for a solution for my Outlook 2007. I don’t know if it is possible, but what I need is that each time I print a mail, it automatically gets marked in some way, either by a flag or a category. I’ve browsed through different menus, settings, and rules in Outlook, and scoured Google for a solution, but with no result. I don’t know if it makes a difference, but my Outlook is connected to an Exchange Server 2010. I would really appreciate it if someone knew a solution to this. Sincerely, Mestika
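    For what it’s worth, Outlook’s object model does not expose a print event you could hook with a rule, so a common workaround is a small VBA macro that prints the selected items and then tags them – run it in place of the normal Print command. A hedged sketch (the category name is an invented example):

        ' Paste into a module in the Outlook VBA editor (Alt+F11),
        ' then wire it to a toolbar button and use it instead of File > Print
        Sub PrintAndMark()
            Dim itm As Object
            For Each itm In Application.ActiveExplorer.Selection
                itm.PrintOut                  ' print with default settings
                itm.Categories = "Printed"    ' hypothetical category name (overwrites existing ones)
                itm.Save                      ' persist the mark on the item
            Next
        End Sub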

    Read the article

  • Configuring LiveID authentication with SharePoint2010

    - by ybbest
    With the addition of the new claims-based authentication framework in SharePoint 2010, SharePoint is now more loosely coupled to the authentication layer than ever. You’ve probably seen presentations or webinars mentioning that you can use claims authentication against authentication providers such as Live ID and OpenID. In this blog I will show you the common problems you may hit while configuring Live ID integration with SharePoint 2010. The detailed configuration can be found in the following posts:

    Part 1 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-1.aspx
    Part 2 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-2.aspx
    Part 3 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-3.aspx

    Here are some problems I hit while following the instructions:

    Problem 1: You get the following exception when running the PowerShell script that creates the new Live ID authentication provider:

        New-SPTrustedIdentityTokenIssuer : Exception of type ‘System.ArgumentException’ was thrown.
        Parameter name: claimType
        At line:1 char:42
        + $authp = New-SPTrustedIdentityTokenIssuer <<<< -Name “LiveID INT” -Description “LiveID INT” -Realm $realm -ImportTrustCertificate $certfile -ClaimsMappings $emailclaim,$upnclaim -SignInUrl “https://login.live-int.com/login.srf” -IdentifierClaim $emailclaim.InputClaimType
        + CategoryInfo : InvalidData: (Microsoft.Share…dentityProvider:SPCmdletNewSPIdentityProvider) [New-SPTrustedIdentityTokenIssuer], ArgumentException
        + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPIdentityProvider

    Solution: You need to remove the existing SPTrustedIdentityTokenIssuer. 1. First get the name of the existing token issuer with Get-SPTrustedIdentityTokenIssuer, then run Remove-SPTrustedIdentityTokenIssuer to remove it (a short sketch follows at the end of this post). 2. After that, re-run the script; everything should work fine.

    Problem 2: Live INT automatically logs you out. Whenever I try to log in (https://login.live-int.com/login.srf), after entering a valid email/password I get redirected to the logout page.

    Solution: You can find the solution in my previous blog.
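    A minimal PowerShell sketch of the fix for Problem 1, assuming the issuer was originally registered under the name “LiveID INT” used in the failing script above:

        # Run in the SharePoint 2010 Management Shell
        # List the registered trusted identity token issuers to find the stale one
        Get-SPTrustedIdentityTokenIssuer | Select-Object Name, Description

        # Remove the stale issuer; substitute the Name reported above
        # ("LiveID INT" here is an assumption based on the failing script)
        Remove-SPTrustedIdentityTokenIssuer -Identity "LiveID INT"

        # Now re-run the original New-SPTrustedIdentityTokenIssuer script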

    Read the article

  • Banco Espírito Santo Increases Sales Campaign Success Rate with Siebel CRM

    - by Tony Berk
    Banco Espírito Santo (BES), founded in 1869, is the second-largest private financial institution in Portugal, with a 20.3% domestic market share, 2.1 million customers, and more than 700 in-country branches. It also has a strong international presence, with operations in 23 countries across four continents. With strong growth in its major markets, BES needed a modern, cost-effective, scalable, and reliable customer relationship management (CRM) solution for its retail operations. The bank wanted to optimize client relationship management and integrate all customer touch points and service channels to improve the success of its sales and marketing initiatives. BES implemented the same CRM solution as many other leading banks: Oracle’s Siebel CRM. With Siebel CRM 8.1 and other Oracle solutions, BES significantly increased sales of its new financial products across all channels by up to 25%, and it expects to increase revenue by up to US$4 million annually. It also improved the success rate of bank branch sales, marketing, and lead-generation campaigns by nearly 10%. “We are very happy with Oracle’s Siebel CRM applications. We already knew that this was the best solution available, but it has surpassed our best expectations,” said João Manaças, Customer Relationship Management Manager, Personal Marketing Department, Banco Espírito Santo. Click here to learn more about BES’s use of Siebel CRM.

    Read the article

  • Ralink RT3060 wireless device configuration on ubuntu 12.04

    - by Stephan
    Concerning How do I get a Ralink RT3060 wireless card working? I’m running Ubuntu 12.04 with an ‘LWPX07 Edimax EW-7711In 150M 1T1R WL PCI Card’, which has an rt3060 chip. Out of the box the card is recognized by the rt2800sta driver. I tried solution one, but that didn’t work: the card still connects to the wireless network, but it is too slow to load any pages. Then I tried solution two, but then the network manager doesn’t see any wireless device.

    $ iwconfig
    lo        no wireless extensions.
    ra0       Ralink STA
              Link Quality:0  Signal level:0  Noise level:0
              Rx invalid nwid:0  invalid crypt:0  invalid misc:0
    eth0      no wireless extensions.

    $ lsmod
    Module                  Size  Used by
    rt3562sta             882296  0

    $ lspci -v
    05:02.0 Network controller: Ralink corp. RT3060 Wireless 802.11n 1T/1R
            Subsystem: Edimax Computer Co. Device 7711
            Flags: bus master, slow devsel, latency 64, IRQ 23
            Memory at ff9f0000 (32-bit, non-prefetchable) [size=64K]
            Capabilities: <access denied>
            Kernel driver in use: rt2860
            Kernel modules: rt3562sta, rt2800pci

    Am I missing a configuration step? How do I tell the network card which driver to use? Thanks in advance, Stephan.

    Update: I found the problem. As described in Steve Swinsburg’s blog (http://steveswinsburg.wordpress.com/2011/03/12/how-to-install-a-d-link-dwa-525-wireless-network-card-in-ubuntu-10-04/), the build must be run in a root shell:

    sudo su
    make && make install

    “You need to use ‘sudo su’ and not just ‘sudo’ so it creates the directories properly.” That was the problem with the solution described above.
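    A hedged sketch of the full driver swap implied by the question and the update – the source path is hypothetical, and the blacklist entries follow the usual Ubuntu convention for keeping the in-kernel rt2800 drivers from claiming the card (they are not stated in the original post):

        # In a root shell, as the quoted blog advises
        sudo su
        cd /path/to/rt3562sta-source        # hypothetical location of the driver source
        make && make install

        # Stop the in-kernel drivers from grabbing the card at boot
        echo "blacklist rt2800pci" >> /etc/modprobe.d/blacklist.conf
        echo "blacklist rt2800sta" >> /etc/modprobe.d/blacklist.conf

        # Load the vendor module now instead of rebooting
        modprobe rt3562sta
        exit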

    Read the article

  • Web Sites All Start When Debugging a Web Site - Visual Studio 2010

    - by Daniel Lackey
    I wanted to blog about this because it was an annoyance to me, and I couldn’t figure out why for quite some time. Have you ever tried debugging one web application in your solution, only to find that all the other web sites in your solution build and then start up their respective Visual Studio Development Servers? It’s not a major problem, but it adds to the time spent waiting for what you are actually trying to debug to start up. After digging through the Visual Studio 2010 settings, I finally found the option to turn it off. It is called Always Start When Debugging and is located in the Properties pane for the web project (click the project node in the Visual Studio IDE). It is set to True by default each time you create a new Web Application project; setting it to False will solve the problem. You will need to set it to False for every web application in your solution, as shown below. In addition, you can set properties controlling which port the development server uses each time it debugs. Keeping the port fixed is helpful for testing purposes. Alternatively, you can have it use a dynamic port each time, so that if a co-worker is debugging in a different session on the same server, you won’t collide over the same port – the machine won’t allow two debug sessions on the same port. Pretty basic stuff, but it seemed like a really quirky setting to me.
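    For reference, a sketch of where these settings are persisted if you would rather edit the project file directly – the element names here are an assumption based on how Visual Studio stores web-project flavor properties, so verify them against your own .csproj before relying on this:

        <ProjectExtensions>
          <VisualStudio>
            <FlavorProperties GUID="{349c5851-65df-11da-9384-00065b846f21}">
              <WebProjectProperties>
                <!-- Corresponds to the "Always Start When Debugging" property -->
                <AlwaysStartWebServer>False</AlwaysStartWebServer>
                <!-- False plus an explicit port pins the Development Server port;
                     True lets Visual Studio assign a dynamic port per session -->
                <AutoAssignPort>False</AutoAssignPort>
                <DevelopmentServerPort>8080</DevelopmentServerPort>
              </WebProjectProperties>
            </FlavorProperties>
          </VisualStudio>
        </ProjectExtensions>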

    Read the article
