Search Results

Search found 20857 results on 835 pages for 'technical support'.


  • difference between compiled and installed via rpm (zypper)

    - by cherouvim
    In an openSUSE 11.1 I download, compile and install ImageMagick via:

        wget ftp://.../pub/graphics/ImageMagick/ImageMagick-6.7.7-0.zip
        unzip ImageMagick-6.7.7-0.zip
        cd ImageMagick-6.7.7-0
        ./configure --prefix=/usr/local/ImageMagick
        make
        make install

    Everything works nicely until I discover that JPG is not supported:

        identify -list format | grep -i jpg
        [nothing related to JPG returned]

    So I reconfigure and recompile using:

        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes --with-jp2=yes
        make
        make install

    But that changes nothing. I end up uninstalling (make uninstall) and installing via zypper:

        zypper install ImageMagick

    This installed version 6.4.3, and now it does support JPG:

        identify -list format | grep -i jpg
        JPG* JPEG rw- Joint Photographic Experts Group JFIF format

    Any idea what is going on here? What is a possible reason that this capability of ImageMagick was not there when compiled from source, but was there when installed from an rpm? Note that I don't necessarily care a lot about ImageMagick (since it now works), but generally about this kind of behaviour, because in one way or another I've seen this happen on other occasions as well.
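
    A likely explanation (an assumption, not confirmed in the thread): ImageMagick's ./configure silently disables any delegate whose development headers it cannot find, so a source build on a box without the JPEG headers simply omits JPEG support, while the distribution rpm is built with all delegates present. A quick check and fix, assuming openSUSE package names:

        # Inspect what configure actually decided, from the source tree
        grep -i jpeg config.log                  # look for failed jpeg library checks
        ./configure 2>&1 | tail -n 30            # the delegate summary lists jpeg yes/no

        # Install the JPEG headers and rebuild
        zypper install libjpeg-devel
        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes
        make && make install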

    Read the article

  • Make Google Chrome with a specific user profile the default browser

    - by Kaushik Gopal
    Is it possible to set Google Chrome with a custom user profile as the default browser? When I set Google Chrome as the default browser, it picks the "Default" user profile rather than the custom one I have set up. I tried setting Google Chrome as the default browser after opening it from that particular user profile, but that doesn't seem to have any effect. I googled around but could only find another poor soul like myself who asked a similar question here: http://www.google.com/support/forum/p/Chrome/thread?tid=69f0a6e776ceab1c&hl=en There weren't any responses to that question. Cheers.
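
    One workaround to sketch (an assumption, not an answer from the thread): Windows registers a single command line as the default browser, so that registration can point at a small wrapper which always passes Chrome's --profile-directory switch. The install path and profile folder name below are placeholders:

        @echo off
        rem chrome-profile.cmd - hypothetical wrapper to register as the default browser;
        rem forwards the URL Windows passes in (%1) to a specific Chrome profile.
        start "" "C:\Program Files\Google\Chrome\Application\chrome.exe" --profile-directory="Profile 1" %1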

    Read the article

  • Can Acer Aspire Revo (Atom 330) be used with two monitors simultaneously?

    - by LeeD
    I'm very attracted to the Acer Revo for the price and the look. As long as I can work on two monitors simultaneously, I'll be happy. I'm not planning to do heavy video editing or gaming; occasional movie streaming would be fine. I will mainly use it for trading, lots of word processing, some photo editing, and connecting with friends. Does anyone have experience using the Revo with 2 or more monitors? The spec says it has VGA and HDMI output, but an Acer salesperson over the phone told me it can support one monitor only..?

    Read the article

  • Sharing information between nodes in Beowulf Cluster

    - by Alejandro Sazo
    I am setting up a Beowulf cluster, and I've been reading that it might be necessary to share the cluster users' home directories between the nodes (assuming these users are local to each machine). The other option is to leave each user with their own home on each node and leave the communication up to the master node. Another idea that came up was to use a single LDAP user logged on to every machine in the cluster, which keeps the idea of a home shared between nodes (but then there is only one home for one user). Which approach is better for this kind of cluster? Edit: The cluster is running Open MPI and it will support CUDA and OpenCL.
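
    For the shared-home approach, the usual pattern (a sketch, not from the question; the hostname and subnet are placeholders) is to export /home from the master over NFS and mount it on every compute node; this also makes distributing the passwordless-SSH keys that Open MPI needs trivial:

        # On the master node: export /home to the cluster subnet
        echo '/home 192.168.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
        exportfs -ra

        # On each compute node: mount it, and persist the mount in /etc/fstab
        mount -t nfs master:/home /home
        echo 'master:/home /home nfs defaults 0 0' >> /etc/fstab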

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview

    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting to multiple SQL Server deployments in Windows Azure Virtual Machines. We will configure a TCP port that we open (and close) in the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect to SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same cloud service

    Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that PS and other commands can be run against them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s). Note: RDP (Remote Desktop Protocol) is kept enabled on both VMs by default so that we can connect to them and check the connections to the SQL instances; this is for demo purposes only and is not actually required.

    Step 1 - Provision VMs and configure ports

    Provision VM1 (DemoVM1), then provision VM2 (DemoVM2) with PowerShell remoting enabled and connected to DemoVM1 (see the example screenshots in the original post if using the portal). After provisioning the two VMs, note the default port configuration of each.

    Step 2 - Verify the TCP port used by the database engine

    By default the port will be 1433; this can be changed to a different port number if desired.

    1. RDP to each of the VMs created above - this also ensures the VMs complete sysprep and finish their configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server engine; in this case 1433.
    4. Change from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4 and 5 for the second VM (DemoVM2).

    Step 3 - Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 - Open port 1433 in the Windows firewall

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output: Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        Rule Name:      DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:        Yes
        Direction:      In
        Profiles:       Domain,Private,Public
        Grouping:
        LocalIP:        Any
        RemoteIP:       Any
        Protocol:       TCP
        LocalPort:      1433
        RemotePort:     Any
        Edge traversal: No
        Action:         Allow
        Ok.

    Step 5 - Connect from DemoVM2 to the DB instance in DemoVM1.

    Step 6 - Close port 1433 in the Windows firewall

        netsh advfirewall firewall delete rule name=DemoVM1Port

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port
        No rules match the specified criteria.

    Step 7 - Try to connect from DemoVM2 to the DB instance in DemoVM1

    Because port 1433 has been closed (in step 6) in the Windows firewall on VM1, we can no longer connect from DemoVM2 to DemoVM1.

    Scenario 2: VMs provisioned in different cloud services

    Two virtual machines are configured in different cloud services, each running a different SQL Server instance, both with remote PowerShell turned on as in scenario 1. As before, RDP is kept enabled for demo purposes only.

    Step 1 - Provision new VM3

    Provision VM3 (DemoVM3; see the example screenshots in the original post if using the portal). After provisioning is complete, note the default port configuration.

    Step 2 - Add a public port to VM1 to connect to from VM3

    Since VM3 and VM1 are not connected through the same cloud service, we need to specify the full DNS address, which includes the public port, when connecting between the machines. We add a public port - 57000 in this case - that maps to private port 1433 and will be used later to connect to the DB instance.

    Step 3 - Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 - Open port 1433 in the Windows firewall

    Run the same netsh add and show commands as in scenario 1, with the same output.

    Step 5 - Connect from DemoVM3 to the DB instance in DemoVM1

    RDP into VM3, launch SSMS and connect to VM1's DB instance. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 - Close port 1433 in the Windows firewall

    Run the same netsh delete and show commands as in scenario 1, with the same output.

    Step 7 - Try to connect from DemoVM3 to the DB instance in DemoVM1

    Because port 1433 has been closed (in step 6) in the Windows firewall on VM1, we can no longer connect from DemoVM3 to DemoVM1.

    Conclusion

    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many virtual machine and SQL Server management tasks. In this blog, we demonstrated how to start a remote PowerShell session and re-configure a virtual machine's firewall to allow (or disallow) SQL Server connections.

    References: SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
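
    The whole open-test-close cycle above can also be wrapped in a single script. A sketch, assuming the same endpoint, port and rule name used in the post (all of them placeholders to adapt):

        # Hypothetical wrapper around the steps above: open the SQL port on the
        # remote VM, run connectivity tests, then close the port again.
        $opt  = New-PSSessionOption -SkipCACheck -SkipCNCheck
        $sess = New-PSSession -ComputerName condemo.cloudapp.net -Port 61503 `
                -Credential (Get-Credential) -UseSSL -SessionOption $opt

        Invoke-Command -Session $sess -ScriptBlock {
            netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
        }

        # ... run SQL connectivity tests from the client here ...

        Invoke-Command -Session $sess -ScriptBlock {
            netsh advfirewall firewall delete rule name=DemoVM1Port
        }
        Remove-PSSession $sess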

    Read the article

  • Munq is for web, Unity is for Enterprise

    - by oazabir
    The Unity Application Block (Unity) is a lightweight, extensible dependency injection container with support for constructor, property, and method call injection. It's a great library for facilitating Inversion of Control, and the recent version supports AOP as well. However, when it comes to performance, it's CPU hungry - so CPU hungry, in fact, that it is impossible to make it work at Internet scale. I was investigating a CPU issue on a portal that gets around 3MM hits per day, and I found unusually high CPU usage. Here's why: I did some CPU profiling on my open source project Dropthings and found that the highest CPU consumer is Unity's Resolve<>(). There's no funky use of Unity in the project - straightforward Register<>() and Resolve<>(). But as you can see, Resolve<>() consumes a significant amount of CPU even after the site is warm and has been running for a while. Then I tried Munq, which is a basic dependency injection container. It has everything you will usually need in a regular project, and it claims to be the fastest DI container out there. So I converted all Unity code to Munq in Dropthings, did a CPU profile, and voilà! There's no trace of any Munq calls anywhere. That proves Munq is a lot faster than Unity.

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2003

    The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2003, written by Hans-Otto Lochmann and Armin Neudert.

    Track: Visual FoxPro and Linux

    This track consists of 4 sessions presented on one day in one sequence. Originally the Linux portion of this track was to be presented by Whil Hentzen, the well-known publisher, book author and conference speaker. Unfortunately illness prevented him from joining this DevCon, and Rainer got the bad news only early on Friday morning. It was definitely too late to find a replacement among the already invited speakers on such short notice, so Rainer decided to take over these "three sessions in a row" himself, with "a little help from his friends". He hired a coach for the weekend and prepared slides and sessions himself - the originally planned slides and session material were still in the USA. Rainer barely survived an endless disaster of C0000005's due to various wrong configuration settings... At the presentation, Jochen Kirstätter helped massively with technical details regarding Linux, whereas Rainer did the slides and the presentation. Gerold Lübben then presented the MySQL part - as originally planned.

    This track concentrated on how to run Visual FoxPro applications on Linux machines with the help of a Windows emulator like Wine. As more and more people use Linux machines in production (and not just for running servers), more and more invitations to bid for a development job include the requirement to run the application in a Linux environment. If you would like to participate in such submissions, then you should get familiar with the open source operating system Linux and the open source database system MySQL. [...]

    These sessions provided a broad, complete overview of where Linux fits into the current computing landscape from the perspective of a VFP developer, where VFP can be used with Linux, and a conceptual plan for how to approach the incorporation of Linux into your day-to-day work. In order to be able to work with a Linux back end, you're going to need to know something about how Linux works. The best way involves a two-step process: First, plunk down a Linux workstation on your desk next to your Windows machine and develop some experience with the new OS. Second, once you have a basic level of comfort with Linux, gained through your experience on a workstation, leverage that knowledge and learn to connect to a Linux server from your Windows machine.

    This track showed both of these processes: what you can expect when you set up your Linux workstation, how to set it up, how to connect to your Windows network, how to fit VFP into the mix, and even how you could use it to replace your Windows workstation in some cases. It also demonstrated how to connect to an existing Linux server, running MySQL or another back end, and how to get your VFP apps talking to that back-end data. The track also showed both of the positions you can take: Rainer disliked it wholeheartedly (the bad-guy position in these talks) and Jochen loved it (the good-guy and "typical Linux techie" position we all love). These opposing positions lasted for three sessions, and both sides were shown with their pros and cons in lively live discussions between the speakers (club banging was forbidden).

    Gerold Lübben showed how Visual FoxPro and MySQL can work together. MySQL is one of the best-known open source databases, available for nearly all platforms. Particularly in eBusiness, MySQL is well positioned and well known for its performance and its stability. Still, we like Visual FoxPro more - for sure. [...]

    Read the article

  • Partner Webcast: Innovation in Products - October 1st, 2012 at 04:00 PM CET (03:00 PM GMT) Program

    - by Richard Lefebvre
    I am pleased to invite you to join the Innovations in Products webcast. Innovations in Products presents Oracle products' new functions and features, including sales positioning. The key objective of these webcasts is to inspire System Integrators' implementation personnel to conduct successful after-sales work in their customer projects. Innovations in Products is presented on the 1st Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for System Integrators' Implementation Certified Specialists, but it is open to other System Integrator personnel as well.

    First, two Oracle representatives will discuss Oracle's contribution to partners. Then you will see the product breakout sessions, followed by Q&A with Oracle experts. Each session lasts a maximum of 1 hour, and a Q&A document covering all questions and answers will be made available after the webcast.

    What are the benefits for partners?
    Find out how Innovations in Products helps you to improve your after-sales work
    Discover new functions and features so you can enrich your customers' solutions
    Learn more about Oracle products, especially sales positioning
    Hear crucial questions raised by colleagues, and learn from their interests
    Engage and present your questions to subject experts
    Be inspired by the richness of Oracle's product portfolio - for your and your customers' benefit

    Note: Should you already be familiar with a specific product, choose another one; doing so will expand your knowledge of the overall product portfolio. Some presentations contain product demonstrations, although they are not intended to be extremely detailed technical presentations.

    Product breakout sessions available on October 1st:
    Fusion HCM Social Capabilities - enterprise social capabilities embedded in how you run your business (Anca Dumitru, HCM Presales Consultant, EMEA Presales Center) - CLICK HERE
    Oracle Fusion Applications Security Concepts - overview (Alexandra Dan, Applications Technology Presales Consultant, EMEA Presales Centre) - CLICK HERE
    Fusion Financials Overview, focus on Fusion Payables - meeting the Payables challenges (Elena Nita, Senior ERP Sales Consultant, EMEA Presales Center) - CLICK HERE
    Introduction to Oracle RightNow CX - empowering companies to engage directly with their customers through great social, web, chat and contact center experiences (Cais Champsi, Presales Consultant, EMEA Presales Centre) - CLICK HERE
    Oracle Endeca Information Discovery - product overview (Emma Palii, BI Sales Consultant, EMEA Presales Center) - CLICK HERE

    To access the 23 previously presented Applications product presentations and the 6 Public Sector Value Proposition presentations, please click here. You might want to bookmark the overall registration page, Innovations in Products October 1st, and the global event calendar page, events.oracle.com.

    Delivery Format
    Innovations in Products is a series of FREE prerecorded Oracle product presentations followed by Q&A, delivered over the Web. Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts will provide verbal answers live. Innovations in Products consists of several parallel prerecorded product breakout sessions, each lasting a maximum of 1 hour. A Q&A document covering all questions and answers will be made available after the webcast. You can also watch Innovations in Products afterwards, as its content will be available online for the next 6-12 months.

    The next Innovations in Products webcasts will be presented as follows: October 1st 2012, January 14th 2013, April 8th 2013.

    Note: Depending on local network bandwidth, please allow a few seconds for the presentations to download. You might want to refresh your screen by pressing F5. Duration: maximum 1 hour.

    For further information please contact me, Markku Rouhiainen.

    Best regards
    Markku Rouhiainen
    Director, Applications Partner Enablement EMEA

    Read the article

  • SQL Authority News – Download and Install Adventure Works 2014 Sample Databases

    - by Pinal Dave
    If you are using SQL Server, there is a good chance that you are familiar with AdventureWorks. AdventureWorks is a sample database shipped with SQL Server, and it can be downloaded from the CodePlex site. AdventureWorks replaced Northwind and Pubs as the sample database in SQL Server 2005, and the Microsoft team keeps updating it as they release new versions. I use the AdventureWorks database for most of my examples, as it is an easy-to-use sample database that is accessible to most people out there. Every new version of SQL Server should have its own AdventureWorks database: SQL Server comes up with new features in every version, and most of the new features need a new sample dataset to demonstrate their capabilities. This is why every version of SQL Server has its own AdventureWorks database. SQL Server 2014 has many new features, and to support them Microsoft has released a new AdventureWorks 2014 sample database. You can download the Adventure Works 2014 sample databases from here. Here is a quick tutorial on how to install the AdventureWorks database on your server. Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
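
    As a minimal sketch of the restore step (assuming the download is the usual .bak backup file; the paths and instance name are placeholders):

        REM Restore the downloaded backup into a local instance
        sqlcmd -S localhost -E -Q "RESTORE DATABASE AdventureWorks2014 FROM DISK = 'C:\Temp\AdventureWorks2014.bak' WITH RECOVERY"

    If the instance's data and log directories differ from those recorded in the backup, add MOVE clauses for the data and log files.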

    Read the article

  • Practical Approaches to increasing Virtualization Density-Part 1

    - by Girish Venkat
    Happy New Year, everyone! Let me kick off the year by talking about virtualization density.

    What is it? The number of virtual servers that a physical server can support, and its increase over the prior physical infrastructure, expressed as a percentage.

    Why is it important? Because density should be indicative of how well the server is actually being consumed.

    So what is wrong? Virtualization density fails to convey the "real usage" of a server. Most hypervisor-based OS virtualization evangelists take pride in the fact that they are now running a virtual server farm of X machines compared to a physical server farm of Y (with Y less than X, obviously). The real question is: has your utilization of the server really increased or not? In an internal study conducted by one of the top financial institutions, server utilization only went up by 15 percentage points, from 30% to 45%. So just by increasing virtualization density, one will not achieve the goal of consuming the servers in the farm better. I will write about possible approaches to increasing virtualization density in the next entry.

    Read the article

  • Oracle buys Secerno

    - by Paulo Folgado
    Adds Heterogeneous Database Firewall to Oracle's Industry-leading Database Security Solutions

    Redwood Shores, CA - May 20, 2010

    News Facts
    Oracle has agreed to acquire Secerno, a provider of database firewall solutions for Oracle and non-Oracle databases.
    Organizations require a comprehensive security solution which includes database firewall functionality to prevent sophisticated attacks from reaching databases.
    Secerno's solution adds a critical defensive layer of security around databases, which blocks unauthorized activity in real-time.
    Secerno's products are expected to augment Oracle's industry-leading portfolio of database security solutions, including Oracle Advanced Security, Oracle Database Vault and Oracle Audit Vault, to further ensure data privacy, protect against threats, and enable regulatory compliance.
    The combination of Oracle and Secerno underscores Oracle's commitment to provide customers with the most comprehensive and advanced security offering that helps reduce the costs and complexity of securing their information throughout the enterprise.
    The transaction is expected to close before the end of June 2010. Financial details of the transaction were not disclosed.

    Supporting Quotes
    "The Secerno acquisition is in direct response to increasing customer challenges around mitigating database security risk," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Secerno's database firewall product acts as a first line of defense against external threats and unauthorized internal access with a protective perimeter around Oracle and non-Oracle databases. Together, Oracle's complete set of database security solutions and Secerno's technology will provide customers with the ability to safeguard their critical business information."
    "As a provider of database firewall solutions that help customers safeguard their enterprise databases, Secerno is a natural addition to Oracle's industry-leading database security solutions," said Steve Hurn, CEO Secerno. "Secerno has been providing enterprises and their IT Security departments strong assurance that their databases are protected from attacks and breaches. We are excited to bring Secerno's domain expertise to Oracle, and ensure continuity and success for our current customers, partners and prospects."

    Support Resources: About Oracle and Secerno | General Presentation | FAQ | Customer Letter | Partner Letter

    Read the article

  • How do I classify using GLCM and SVM Classifier in Matlab?

    - by Gomathi
    I'm working on a project for liver tumor segmentation and classification. I used region growing and FCM for liver and tumor segmentation respectively, then the Gray Level Co-occurrence Matrix (GLCM) for texture feature extraction. I have to use a Support Vector Machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program this in Matlab? To the GLCM program, I gave the tumor-segmented image as input. Was I correct? If so, I think my output will also be correct. My GLCM code, as far as I have tried, is:

        I = imread('fzliver3.jpg');
        GLCM = graycomatrix(I,'Offset',[2 0;0 2]);
        stats = graycoprops(GLCM,'all')
        t1 = struct2array(stats)

        I2 = imread('fzliver4.jpg');
        GLCM2 = graycomatrix(I2,'Offset',[2 0;0 2]);
        stats2 = graycoprops(GLCM2,'all')
        t2 = struct2array(stats2)

        I3 = imread('fzliver5.jpg');
        GLCM3 = graycomatrix(I3,'Offset',[2 0;0 2]);
        stats3 = graycoprops(GLCM3,'all')
        t3 = struct2array(stats3)

        t = [t1;t2;t3]
        xmin = min(t);
        xmax = max(t);
        scale = xmax-xmin;
        tf = (x-xmin)/scale

    Was this a correct implementation? Also, I get an error at the last line. My output is:

        stats =
            Contrast: [0.0510 0.0503]
            Correlation: [0.9513 0.9519]
            Energy: [0.8988 0.8988]
            Homogeneity: [0.9930 0.9935]
        t1 =
            0.0510 0.0503 0.9513 0.9519 0.8988 0.8988 0.9930 0.9935
        stats2 =
            Contrast: [0.0345 0.0339]
            Correlation: [0.8223 0.8255]
            Energy: [0.9616 0.9617]
            Homogeneity: [0.9957 0.9957]
        t2 =
            0.0345 0.0339 0.8223 0.8255 0.9616 0.9617 0.9957 0.9957
        stats3 =
            Contrast: [0.0230 0.0246]
            Correlation: [0.7450 0.7270]
            Energy: [0.9815 0.9813]
            Homogeneity: [0.9971 0.9970]
        t3 =
            0.0230 0.0246 0.7450 0.7270 0.9815 0.9813 0.9971 0.9970
        t =
            0.0510 0.0503 0.9513 0.9519 0.8988 0.8988 0.9930 0.9935
            0.0345 0.0339 0.8223 0.8255 0.9616 0.9617 0.9957 0.9957
            0.0230 0.0246 0.7450 0.7270 0.9815 0.9813 0.9971 0.9970

        ??? Error using ==> minus
        Matrix dimensions must agree.

    [The images were attached to the original question.]
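
    Two things go wrong in the last line (an observation based on the code above, not an answer from the thread): x was never defined (it should be t), and subtracting a 1-by-8 row vector from a 3-by-8 matrix needs explicit expansion in this version of Matlab. A sketch of the min-max normalization:

        % Min-max normalize each feature column of t to [0,1].
        % bsxfun expands the 1-by-8 row vectors across the rows of t.
        xmin  = min(t);
        scale = max(t) - xmin;
        tf = bsxfun(@rdivide, bsxfun(@minus, t, xmin), scale);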

    Read the article

  • How to enable services Discovery API in GoogleCL?

    - by Marcos
    There are bits and pieces of information all over the place, but I'm trying to put it all together so that GoogleCL finally accesses more than the initial 7 services. Does anyone know of a step-by-step guide? Right now any attempt outside these services results in an error message:

        google tasks list
        Did you specify the service correctly? Must be one of 'picasa', 'blogger', 'youtube', 'docs', 'contacts', 'calendar', 'finance'

    I installed GoogleCL from the Ubuntu repos and authenticated a few bundled services like contacts, docs etc., and those work great, giving me access to certain operations like uploading from the command line. I would really like to get it to support tasks and all the other eligible Google services shown at https://code.google.com/apis/explorer/#_s=tasks

    Here are some guides/partial steps I've found:

        http://code.google.com/p/googlecl/wiki/DiscoveryManual (indicates the need to check out updated GoogleCL from the Subversion repository)
        http://code.google.com/p/google-api-python-client/wiki/Installation (easy_install --upgrade google-api-python-client)
        http://code.google.com/p/googlecl/wiki/Install
        http://code.google.com/p/googlecl/source/checkout

        sudo -i
        cd /usr/local/src/
        svn checkout http://googlecl.googlecode.com/svn/trunk/ googlecl-read-only
        cat googlecl-read-only/INSTALL.txt
        cd /usr/local/src/googlecl-read-only/
        python setup.py install

    Result:

        $ google discovery list
        Traceback (most recent call last):
          File "/usr/bin/google", line 488, in run_interactive
            run_once(options, args)
          File "/usr/bin/google", line 540, in run_once
            options.config)
          File "/usr/bin/google", line 364, in import_service
            force_gdata_v1 = config.lazy_get(package.SECTION_HEADER,
        AttributeError: 'module' object has no attribute 'SECTION_HEADER'

    Read the article

  • Any way to stop VMWare workstation from dropping SSH connections?

    - by oljones
    I have VMware Workstation 8 with a few Linux guests, and I have had problems maintaining an active SSH connection to my VMs when they are in bridged mode. I first read that the onboard Realtek network cards were not well supported, so I bought an Intel PRO/1000 GT card, which supposedly is supported. But this made no difference: connections via SSH are active for about the first 3 minutes, then hang and die. I have changed the TCP checksum offload settings on the Intel and Realtek NICs, but that only works some of the time, and even then not for very long. The best I could do was about 20 minutes before the connection was dropped. Any ideas?
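
    One thing worth ruling out first (a suggestion, not a confirmed diagnosis): a connection that reliably dies after a few idle minutes sounds like state expiring somewhere on the path rather than a NIC fault. SSH protocol-level keepalives send traffic through the encrypted channel at fixed intervals; in the client's ~/.ssh/config:

        # Send an encrypted keepalive every 30 s; give up after 4 missed replies
        Host *
            ServerAliveInterval 30
            ServerAliveCountMax 4

    The same can be passed per invocation with ssh -o ServerAliveInterval=30 user@guest.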

    Read the article

  • accessing a web server from the LAN and WAN

    - by jessh
    My router does not support loopback (NAT hairpinning). In order to view a webpage on my server, I either have to type in the local IP (192.168.1.201) or be on another network. What are my options for making this easier? Here are some possibilities:

    Route all web traffic through an external proxy (seems to be overkill)
    Run my own DNS server (where to start?!)
    Buy a new router that supports loopback.

    Surely there is another way for me to use my laptop on the LAN and the WAN by typing in my domain, more easily than these solutions.
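
    The lightest-weight fix here (a suggestion, not from the question; the domain is a placeholder) is split-horizon name resolution: make LAN clients resolve the public domain to the server's private address. For just the laptop, one /etc/hosts line suffices; dnsmasq on any always-on LAN box extends it to every client:

        # Option 1: single machine - one line in the laptop's /etc/hosts
        192.168.1.201   example.com

        # Option 2: whole LAN - dnsmasq directive (point clients' DNS at this box)
        address=/example.com/192.168.1.201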

    Read the article

  • How to keep windows 2003 Daylight Saving values updated

    - by SirMoreno
    My web app runs on Windows 2003 and .NET 3.5. I have users from Israel (GMT+2), and Israel switched to daylight saving time on 26/3/10, so it is now GMT+3. I use TimeZoneInfo.ConvertTime, which doesn't know that the daylight saving switch happened on 26/3/10, so it still converts to GMT+2. I asked on StackOverflow: http://stackoverflow.com/questions/2530834/problem-with-timezoneinfo-converttime-missed-the-daylight-saving-switch/2532104#2532104 and I was told that I need to update: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones\Israel Standard Time\Dynamic DST. I found this update: http://support.microsoft.com/kb/976098, which is supposed to fix the Dynamic DST for 2010. Is this the update I need? Where can I find an update that handles 2011, 2012, …? Will I need to update my Windows every year to get the DST right? Thanks.

    Read the article

  • How to view only Mail on shared email account?

    - by TomatoSandwich
    I have a support account which I should have access to in my Outlook 2010; however, since changing from 2003, the situation has become unusual. I used to be able to just view the shared account's mail items as a separate account, without its Calendar, Tasks and Reminders popping up in my face at all hours of the day. Now, if I add the account, I get upwards of 60 task reminders that are not from my personal account and that clog up my Reminders window and task list. Is there a way to show only my own Tasks and Reminders in Outlook 2010? I've tried the Advanced Filter option on the Tasks list, but if I set it to show only things from or to myself, either everything disappears or nothing disappears. I looked in the email account settings for something like 'Read email only', or something to do with only showing some of the modules of Outlook, but found nothing useful.

    Read the article

  • With Monit, how do I restart a process when a directory timestamp check fails?

    - by Alterscape
    In my /etc/monit/monitrc I have the following lines:

        check process foo_server with pidfile /var/run/bwam_server.pid
            start program = "/Users/foo/foo_server.sh start"
            stop program = "/Users/foo/foo_server.sh stop"

        check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data"
            if timestamp > 1 minute then alert
            #if timestamp > 1 minute then restart foo_server

    I know I shouldn't have some of this stuff in my home directory, but this aside: if I uncomment the last line, Monit tells me syntax error on foo_server - but as far as I understand, I am correctly defining the process. How else do I reference it?
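
    One workaround to try (an assumption, not a confirmed answer): Monit's restart action applies to the service being checked, not to an arbitrary other service, which would explain the syntax error on foo_server. A directory check can instead shell out to Monit itself via the exec action; the path to the monit binary below is an assumption:

        check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data"
            # exec runs an arbitrary command when the test fails
            if timestamp > 1 minute then exec "/usr/local/bin/monit restart foo_server"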

    Read the article

  • Caching API Proxy Server

    - by edc1591
    I need a server that caches API responses and then forwards them along to a desktop app. I don't have much experience with this, so I have a few questions. First of all, what kind of server should I get? I already use Linode for my websites, so ideally I'd like to go with them. I expect anywhere from 30 million to 40 million requests to my proxy server each month. Will a 512 Linode be able to support that? Also, is there software out there that does this already, or will I have to write my own? The API responses are roughly 10 KB each on average, so doing the math, that's 300-400 GB of transfer each month. Should I just add more transfer to whatever server I buy, or can I somehow compress the API responses before sending them off to the user? Thanks for any help.
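
    Caching reverse proxies are a solved problem, so there is no need to write one from scratch; nginx (or Varnish/Squid) covers both the caching and the compression question. A minimal nginx sketch, where the upstream URL, cache sizes and TTL are placeholders to adapt:

        # /etc/nginx/conf.d/api-cache.conf (hypothetical)
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                         max_size=1g inactive=60m;

        server {
            listen 80;
            location / {
                proxy_pass        https://api.example.com;  # upstream API (placeholder)
                proxy_cache       api_cache;
                proxy_cache_valid 200 10m;   # keep successful responses 10 minutes
                gzip              on;        # compress responses to the desktop app
                gzip_types        application/json;
            }
        }

    Every cache hit is a request the upstream never sees, and gzip typically shrinks 10 KB JSON bodies substantially, which directly reduces the monthly transfer bill.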

    Read the article

  • Managing Social Relationships for the Enterprise – Part 1

    - by kellsey.ruppel
    By Reggie Bradford, Senior Vice President, Oracle

    Today, Mark Hurd, President of Oracle, Thomas Kurian, Executive Vice President of Oracle, and I discussed the strategic importance of how social media is impacting the enterprise and how it is changing the way customers, prospects, employees and investors interact with brands worldwide. Oracle understands that the consumer is in control, and as such, brands must evolve and change to meet growing needs. In addition, according to social media thought leader and Altimeter Group analyst Jeremiah Owyang, companies now average 178 corporate-owned social media accounts. When Oracle added leading social marketing, listening analytics and development tools from Vitrue, Collective Intellect and Involver to Oracle's Cloud Services Suite, we went beyond providing a single set of tools: we developed an entire framework, including a comprehensive social relationship management suite, to help companies move beyond the social enterprise and achieve the social-enabled enterprise. The fundamental shift from transaction to engagement means that enterprises need not only a social strategy, but should also ensure that the information and data received from social initiatives flow back to marketing, sales, support and service. Doing so enables companies to deliver a proactive and compelling experience, and provides analytics to turn engagement into opportunity - and ultimately that opportunity into revenue. On September 13, 2012, I am delighted to sit down with Jeremiah to further the discussion about how enterprises are addressing social media strategies and managing content. In addition, we will be taking your questions after the webinar via Twitter (@Oracle, @ReggieBradford, @cfinn, @jowyang). Use #oracle and #socbiz to submit questions and follow the conversation. I look forward to speaking with you and answering your questions online. For more information about becoming a social-enabled enterprise, visit www.oracle.com/social. And don't miss the insights of other social business thought leaders at www.oracle.com/goto/socialbusiness.

    Read the article

  • Partner Webcast – Oracle CRM: The Age of the Customer - 18 July 2013

    - by Thanos
    High-touch solutions for the complete customer experience. How does Customer Relationship Management change in "the age of the customer" - or does it at all? Customer relationship management has changed over the past years from a pure "inside out" point of view, where the customer is the center of attention, to an "outside in" discipline, where the customer has become the driving force: away from the 360° view through data, toward a holistic view of the customer's journey and experience, through behavioral analysis and interaction across all touch points along the lifecycle of a customer relationship. Learn how this approach, integrating sales, service and marketing channels into one cohesive customer experience, can drive customer experience and support acquisition, retention and efficiency in your customer relationships. With Oracle's Sales, Service and Marketing cloud offerings, you can be ahead of the game and provide a consistent and personalized voice to your customers, regardless of which channels you favor and your customers prefer. Integrated, cross-channel campaign automation and service delivery, as well as feedback loops to sales automation, will provide you with the tools to achieve a top-of-the-line customer experience.

    Agenda
    Oracle Customer Experience - Introduction to a new take on CRM
    Oracle Sales Cloud - Integrated Salesforce Automation
    Oracle Marketing Cloud - Cross-Channel Campaign Management
    Oracle Service Cloud - Channel-blending in service delivery

    Delivery Format
    This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend. Duration: 1 hour. REGISTER NOW. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com.

    Read the article

  • Partnering with your Applications – The Oracle AppAdvantage Story

    - by JuergenKress
    So, what is Oracle AppAdvantage?
    A practical approach to adopting cloud, mobile, social and other trends
    A guided path to aligning IT more closely with business objectives
    Maximizing the value of existing investments in applications
    A layered approach to simplifying IT, building differentiation and bringing innovation
    All of the above?

    Suggested tweets:
    Enhance the value of your existing applications investment with #Oracle #AppAdvantage
    Aligning biz and IT expectations on Simplifying IT, building Differentiation and Innovation #AppAdvantage
    Adopt a pace layered approach to extracting biz value from your apps with #AppAdvantage
    Bringing #cloud, #social, #mobile to your apps with #Oracle #AppAdvantage

    Embracing Situational IT
    In the next IT Leaders Editorial, Rick Beers discusses the necessity of IT disruption and #AppAdvantage.
    Rick Beers sheds light on Situational Leadership and the path to success #AppAdvantage.
    Rick Beers draws parallels with CIOs' strategic thinking and the #Oracle #AppAdvantage approach.
    Do you have this paper in your summer reading list? Aligning biz and IT #AppAdvantage
    What does Situational Leadership have to do with Oracle AppAdvantage? Catch the next piece in Rick Beers' monthly series of IT Leaders Editorials and find out. #AppAdvantage

    Middleware Minutes with Howard Beader - August edition
    In the quarterly column, @hbeader discusses the impact of #cloud, #mobile, #fastdata on #middleware
    Making #cloud, #mobile, #fastdata a part of your IT strategy with #middleware
    What keeps the #oracle #middleware team busy? Find out in the inaugural post of the quarterly update on #middleware
    A recent #middleware news update, along with a preview of things to come from #Oracle, in @hbeader's quarterly column
    In his inaugural post, Howard Beader, senior director for Oracle Fusion Middleware, discusses recent industry trends including mobile, cloud, fast data and integration, and how these are shaping IT and business requirements.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Facebook Wiki Mix Forum
    Technorati Tags: AppAdvantage, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • PXE in an 802.1X environment

    - by newmanth
    My organization is about to implement 802.1X on our enterprise network, but we currently use PXE-based OS deployment sequences in SCCM, and I'm looking for a way to continue using PXE in an 802.1X environment. Our infrastructure uses Cisco network gear running IOS 12.2 (or newer). We are an all-Windows network and all clients support 802.1X. All new workstations have Intel AMT available (but not factory-configured). In the worst case, we'll use a guest VLAN for OSD, but I'd rather have the OSD occur in an authenticated session. I've seen white papers that describe using AMT to act as a supplicant for PXE boot, but can't find any implementation details...
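
    A common middle ground short of the guest VLAN (a sketch from general 802.1X practice, not implementation details for the AMT approach; the ACL and interface names are placeholders, and "authentication open" needs a sufficiently recent 12.2 SE image): "low-impact" mode keeps the port forwarding pre-authentication but filters it with an ACL that admits only what PXE needs, so the deployment can start before the full OS comes up and authenticates:

        ! Pre-auth ACL: permit only what PXE needs (DHCP and TFTP)
        ip access-list extended PRE-AUTH
         permit udp any any eq bootps
         permit udp any any eq bootpc
         permit udp any any eq tftp
        !
        interface GigabitEthernet1/0/1
         ip access-group PRE-AUTH in
         authentication open
         authentication port-control auto
         dot1x pae authenticator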

    Read the article

  • off-the-shelf HDD in Dell Optiplex 755 - Invalid Replacement - Press F1 when rebooting

    - by Eric Liprandi
    Hi, I recently put an SSD into my work Optiplex 755. Since then, every time the system boots or reboots, I am prompted to hit F1 to continue, with a message saying the HDD replacement is not valid. The system works just fine. What I gather from Dell's website is that the complaint is because I left the original drive in as a data drive, and apparently the Optiplex does not support two drives in the configuration we originally purchased. Any suggestions on how to get rid of this message? Some suggested Intel's MEBx and turning it off, but I did not succeed earlier. It's not a big deal 80% of the time, but I regularly work from home and occasionally need to fire off a reboot, and well, you don't get the BIOS screen remotely :) Regards, Eric.

    Read the article

  • How do I implement the bg, fg, and & functionality in my custom unix shell program written in C

    - by user1631009
    I am trying to extend the functionality of the custom unix shell which I wrote earlier as part of a lab assignment. It currently supports external commands through execvp calls; built-in commands like pwd, cd, history, echo and export; and also redirection and pipes. Now I want to add support for running a command in the background, e.g.:

        $ ls -la &

    I also want to implement the bg and fg job-control commands. I know backgrounding can be achieved by forking a child process and not waiting for it in the parent process. But how do I then bring this command back to the foreground using fg? I have the idea of entering each background command in a list, assigning each of them a serial number, but I don't know how to make the processes execute in the background and then bring them back to the foreground. I guess the wait() and waitpid() system calls would come in handy, but I am not that comfortable with them; I tried reading the man pages but am still in the dark. Can someone please explain in layman's language how to achieve this in UNIX system programming? And does it have something to do with the SIGCONT and SIGTSTP signals?
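
    A minimal sketch of the mechanism (my own illustration, with error handling and the multi-job list omitted; the single job_pgid stands in for the serial-numbered list mentioned above): the shell puts every command in its own process group; & simply skips the blocking wait; fg gives the terminal to the job's group with tcsetpgrp(), resumes it with SIGCONT and waits; bg resumes it without the terminal handoff.

        #include <signal.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* One remembered job's process group; a real shell keeps a list. */
        static pid_t job_pgid;

        /* Shell init should ignore SIGTTOU so tcsetpgrp() from the shell is safe:
         *     signal(SIGTTOU, SIG_IGN);
         */
        void launch(char **argv, int background) {
            pid_t pid = fork();
            if (pid == 0) {                        /* child */
                setpgid(0, 0);                     /* own process group = one job */
                execvp(argv[0], argv);
                _exit(127);
            }
            setpgid(pid, pid);                     /* also in parent, to avoid a race */
            if (background) {
                job_pgid = pid;                    /* '&': remember it, don't wait */
            } else {
                int status;
                tcsetpgrp(STDIN_FILENO, pid);      /* foreground job owns the tty */
                waitpid(pid, &status, WUNTRACED);  /* returns on exit OR Ctrl-Z stop */
                tcsetpgrp(STDIN_FILENO, getpgrp());/* shell takes the tty back */
                if (WIFSTOPPED(status))            /* Ctrl-Z delivered SIGTSTP */
                    job_pgid = pid;                /* it is now a stopped job */
            }
        }

        void builtin_bg(void) {                    /* 'bg': resume without the tty */
            kill(-job_pgid, SIGCONT);
        }

        void builtin_fg(void) {                    /* 'fg': tty handoff + resume + wait */
            int status;
            tcsetpgrp(STDIN_FILENO, job_pgid);
            kill(-job_pgid, SIGCONT);
            waitpid(-job_pgid, &status, WUNTRACED);
            tcsetpgrp(STDIN_FILENO, getpgrp());
        }

    So yes: WUNTRACED is what lets waitpid() notice a SIGTSTP stop (the signal Ctrl-Z sends; there is no SIGSTP), and SIGCONT is what bg and fg send to resume the whole process group.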

    Read the article
