Search Results

Search found 22803 results on 913 pages for 'customer support sr'.

Page 627/913 | < Previous Page | 623 624 625 626 627 628 629 630 631 632 633 634  | Next Page >

  • How can I keep websites from knowing where I live?

    - by D Connors
    This question is about practicality and convenience, not security. I live in Brazil and, apparently, every single website I visit knows about it. Usually that's OK, but quite a few sites don't use that information sensibly. For instance: Bing keeps thinking that Brazilian pages are far more relevant to me than American ones (they're not). Google.com always redirects me to google.com.br. Microsoft automatically sends me to horribly translated support pages in Portuguese (which would be easier to read in English). These are just a few examples. Usually it's stuff I can live with (or work around), but some of it is just plain irritating. I have geolocation disabled in Firefox, so I guess they're getting this information either from my IP or from Windows itself (which I bought here). Is there a way to avoid this? Either tell them nothing, or make them think I live somewhere else? Thanks

    Read the article

  • What is wrong with my game loop/mechanic?

    - by elias94xx
    I'm currently working on a 2D side-scrolling game prototype in HTML5 canvas. My implementation so far includes a sprite, vector, loop and ticker class/object, which can be viewed here: http://elias-schuett.de/apps/_experiments/2d_ssg/js/ So my game essentially works well on today's low-spec PCs and laptops, but it does not on an older Windows XP machine I own or on my Android 2.3 device. On those I tend to get ~10 FPS, which results in a delta value so high that it gets clamped to 1.0, which results in a slow loop. Now I know for a fact that there is a way to implement a super smooth 60 or 30 FPS loop on both devices. The best example would be: http://playbiolab.com/ I don't need all the chunk and debugging technology impact.js offers. I could even write a super simple game where you just control a square, and it still wouldn't run at an equally smooth 30 or 60 FPS. Here is the Loop class/object I'm using. It requires a requestAnimationFrame unify function. Both devices I've tested my game on support requestAnimationFrame, so there is no interval fallback.

        var Loop = function(callback) {
            this.fps = null;
            this.delta = 1;
            this.lastTime = +new Date;
            this.callback = callback;
            this.request = null;
        };

        Loop.prototype.start = function() {
            var _this = this;
            this.request = requestAnimationFrame(function(now) {
                _this.start();
                _this.delta = (now - _this.lastTime);
                _this.fps = 1000 / _this.delta;
                _this.delta = _this.delta / (1000/60) > 2 ? 1 : _this.delta / (1000/60);
                _this.lastTime = now;
                _this.callback();
            });
        };

        Loop.prototype.stop = function() {
            cancelAnimationFrame(this.request);
        };
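    One pattern that stays smooth at 10 FPS and 60 FPS alike is a fixed-timestep loop with an accumulator: instead of scaling movement by a clamped delta, the simulation always advances in constant steps, and slow devices simply render fewer frames. A rough sketch under the same requestAnimationFrame assumption (the class and variable names here are illustrative, not part of the original code):

        // Sketch of a fixed-timestep loop. update() advances game logic by a
        // constant step; render() draws once per animation frame.
        var FixedLoop = function(update, render) {
            this.step = 1000 / 60;   // fixed simulation step in ms
            this.accumulator = 0;
            this.lastTime = null;
            this.update = update;
            this.render = render;
            this.request = null;
        };

        FixedLoop.prototype.start = function() {
            var _this = this;
            this.request = requestAnimationFrame(function(now) {
                _this.start();
                if (_this.lastTime === null) _this.lastTime = now;
                _this.accumulator += now - _this.lastTime;
                _this.lastTime = now;
                // Cap the backlog so one long hitch can't snowball.
                if (_this.accumulator > 200) _this.accumulator = 200;
                while (_this.accumulator >= _this.step) {
                    _this.update(_this.step);  // 0..n fixed steps per frame
                    _this.accumulator -= _this.step;
                }
                _this.render();
            });
        };

        FixedLoop.prototype.stop = function() {
            cancelAnimationFrame(this.request);
        };

    With this structure the game's speed is identical on fast and slow hardware; only the frame rate differs.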

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with data in near real-time throughout the day can do so without decreasing the performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today's fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post. If we look at the solution architecture, as shown on the diagram below, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10x boost when they move their ETL to ODI on Exadata. For some companies the performance gain is even higher. For example, a large insurance company ran a proof of concept comparing ODI against a traditional ETL tool (one of the market leaders) on Exadata. The same process that took 5 hours and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI, making Oracle Data Integrator 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you get the most out of your Exadata investment with a truly optimized solution. GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that also demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers them to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine's power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers:
        - Non-invasive data capture from heterogeneous sources, avoiding any performance impact on the source
        - No mid-tier; set-based transformations use database power
        - Mini-batches throughout the day, or bulk processing nightly, which means maximum availability for the DW
        - An integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse
    In addition to Starwood Hotels and Resorts, Morrison Supermarkets, the United Kingdom's fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications, with the goal of achieving a single view into operations for improved customer service.
The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse—extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrison’s success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • SOA Community Newsletter March 2012

    - by JuergenKress
    Dear SOA partner community member, Thank you for your excellent feedback on the brand new Patch Set 5! PS5 is a combined release of all Fusion Middleware components, and we recommend using it for all your upcoming projects. We are very keen to hear your feedback on PS5, so please send it to us, especially if your first customers are in production. Edwin, Roland and Demed from product management published a joint paper, Start Small, Grow Fast. Please let us know if you are interested in writing a joint paper. Till we meet again! To read the newsletter please visit http://tinyurl.com/soanewsMarch2012 (OPN account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress,WebLogic 12c,SOA Implementation Assessment,BPM Implementation Assessment,SOA Certification,SOA Specialist,BPM Certification,BPM Specialist,SOA Suite for Healthcare Integration,SOA Community Forum,SOA Specialization

    Read the article

  • Oracle Spatial and Graph – A year in review

    - by Mandy Ho
    What a great year for Oracle Spatial! Or shall I now say, Oracle Spatial and Graph, with our official name change this summer. There were so many exciting events and updates this year, and this blog will review and link to some of the events you may have missed. We kicked off 2012 with our webinar: Situational Analysis at OnStar with Oracle Spatial and Graph. We collaborated with OnStar's Emergency Strategy and Outreach expert, Jeff Joyner, on how OnStar uses Google Earth Visualization, NAVTEQ data and Oracle Database to deliver fast, accurate emergency services to its customers. In the next webinar in our 2012 series, Oracle partner TARGUSinfo showcased how to build a robust, scalable and secure customer relationship management system, with built-in mapping and spatial analysis, deployed in the cloud. This is a very cool system using all Oracle technologies, including Oracle Database and Fusion Middleware MapViewer. Attendees learned how to gather market insight, score prospects and customers, and perform location analysis. The replay is available here. Our final webinar of the year focused on using Oracle Business Intelligence tools, along with Oracle Spatial and Graph, to perform location-aware predictive analysis. Watch the webcast here. In June, we joined up with the Location Intelligence conference in Washington, DC, and had a very successful 2012 Oracle Spatial User Conference. Customers and partners from the US, as well as from EMEA and Asia, flew in to share experiences and ideas, and get technical updates from Oracle experts. Users were excited to hear about spatial-Exadata performance, and advances in MapViewer and BI. Peter Doolan of Oracle Public Sector kicked off the event with a great keynote, and US Census, NOAA, and Ordnance Survey Great Britain were just a few of the presenters. Presentation archive here. We recognized some of the most exceptional partners and customers for their contributions to advancing mainstream solutions using geospatial technologies. Planning for 2013's conference has already started. Please contribute your papers for consideration here: http://www.locationintelligence.net/ We also launched a new Oracle PartnerNetwork Spatial Specialization, to enable partners to get validated in the marketplace for their expertise in taking solutions to market. Individuals can also get individual certifications. Learn more here. Oracle Open World was not to disappoint, with news regarding our next Oracle Spatial and Graph release, as well as the announcement of our new Oracle Spatial and Graph SIG board! Join the SIG today. One more exciting event as we look to 2013: spatial and location technologies have a dedicated track at the January BIWA SIG Summit, on January 9-10 in Redwood Shores, CA.
View the agenda and register here: www.biwasummit.org. We thank you for all your support during the year of 2012 and look towards an even more exciting 2013! Wishing you and your family a prosperous New Year and Happy Holidays!

    Read the article

  • Should the syntax for disabling code differ from that of normal comments?

    - by deltreme
    For several reasons during development I sometimes comment out code. As I am chaotic and sometimes in a hurry, some of these make it to source control. I also use comments to clarify blocks of code. For instance:

        MyClass MyFunction()
        {
            (...)
            // return null; // TODO: dummy for now
            return obj;
        }

    Even though it "works", and a lot of people do it this way, it annoys me that you cannot automatically distinguish commented-out code from "real" comments that clarify code: it adds noise when trying to read code, and you cannot search for commented-out code, for instance in an on-commit hook in source control. Some languages support multiple single-line comment styles. For instance, in PHP you can use either // or # for a single-line comment, and developers can agree on using one of these for commented-out code:

        # return null; // TODO: dummy for now
        return obj;

    Other languages, like C# which I am using today, have only one style for single-line comments (right? I wish I were wrong). I have also seen examples of "commenting out" code using compiler directives, which is great for large blocks of code but a bit overkill for single lines, as two new lines are required for the directive:

        #if compile_commented_out
            return null; // TODO: dummy for now
        #endif
        return obj;

    So as commenting out code happens in every(?) language, shouldn't "disabled code" get its own syntax in language specifications? Are the pros (separation of comments / disabled code, editors / source control acting on them) good enough, and the cons ("shouldn't do commenting-out anyway", not a functional part of a language, potential IDE lag (thanks Thomas)) worth sacrificing? Edit: I realise the example I used is silly; the dummy code could easily be removed, as it is replaced by the actual code.
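    Even without language support, a team convention plus a small script can provide the searchability the question asks for. A rough sketch in JavaScript (Node), where the "//#off" marker is an invented convention, not any standard:

        // Hypothetical pre-commit check: list lines carrying a "disabled code"
        // marker so commented-out code can't slip into source control unnoticed.
        var fs = require("fs");

        var MARKER = "//#off";  // invented convention for disabled code
        var files = process.argv.slice(2);
        var found = 0;

        files.forEach(function(file) {
            var lines = fs.readFileSync(file, "utf8").split("\n");
            lines.forEach(function(line, i) {
                if (line.indexOf(MARKER) !== -1) {
                    console.log(file + ":" + (i + 1) + ": " + line.trim());
                    found++;
                }
            });
        });

        process.exit(found > 0 ? 1 : 0);  // non-zero exit can block the commit

    Run from a hook, e.g. node check-disabled.js src/*.js, it fails whenever marked-out code is about to be committed.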

    Read the article

  • IPv6 6to4 on Windows Server

    - by Graham Wager
    I'm looking for a relatively simple guide to setting up IPv6 properly on a home network. This network currently has a server (Windows Server 2008 R2) running RRAS that establishes connectivity to the internet using a demand-dial PPPoE connection and handles the NAT. It also hosts a DNS server and DHCP. My ISP does not support IPv6, but I have a static IPv4 address. I've read about 6to4 and signed up at tunnelbroker.net, but quickly felt out of my depth. How do I configure my network to use it, and how should I configure my DHCP server with regard to IPv6 addresses?

    Read the article

  • Windows Event Viewer - XML Custom Filter

    - by Frank
    <QueryList>
      <Query Id="0" Path="Application">
        <Select Path="Application">
          *[EventData[Data and (Data="Error")]]
        </Select>
      </Query>
    </QueryList>

    I believe the above XML custom filter would work if I wanted to check for events where "Data" equals the word "Error". However, what I want to express is that I want the events where Data CONTAINS the word "Error". How do I express that? I've Googled around, but I can find no references to regular-expression-like pattern matching in the Event Viewer. XPath has "contains", but if Event Viewer supports it, I cannot figure out the syntax for invoking it.

    Read the article

  • How do I install 10.04 on a Vortex86DX embedded system?

    - by mathematician1975
    I am trying to install Ubuntu on a Netcom NC-499 board that contains a Vortex86DX processor. The processor vendor claims support for Ubuntu 10.04, but I am having problems installing it. I am trying to install to an 8GB compact flash card attached to the board with an IDE connector, using a USB-connected CD-ROM drive and a burned ISO image obtained from this link: http://old-releases.ubuntu.com/releases/10.04.0/ubuntu-10.04.3-desktop-i386.iso. Installation proceeds up to around 78%, but during the stage where the installer reports that it is "configuring apt", it terminates with a popup dialog containing the following: "The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again." I have no idea what to do at this point. I am a Linux novice and I do not really know how to investigate problems with the installation. I have configured the BIOS exactly as the vendor specifies, and they assure me that this version is fully compatible with their hardware, yet I am unable to get a decent install. I am able to install Ubuntu 8.04 using exactly the same procedure, so I am sure there is no problem with my CD-ROM compatibility or the compact flash drive. Any help will be gratefully received.

    Read the article

  • Non-Windows, non-Unix-like OS's?

    - by dsimcha
    Since most operating systems I've heard of besides Windows seem to derive their heritage from Unix, I've been curious whether any OS's with the following characteristics exist: Not generally considered Unix-like, i.e. wasn't designed with Unix compatibility as a primary goal, doesn't use X11 as its default GUI in the most common distributions, doesn't support Unix commands by default, etc. Not in the Windows NT family. Is a modern production operating system, not a purely legacy operating system, a research/hobby project or an OS that's still in an alpha state. Is targeted at commodity x86/x64 PC hardware.

    Read the article

  • Windows 2003 and 2008 AD integrated DNS zones

    - by floyd
    We have a Windows Server 2003 machine, DC1, which is our primary DC holding all FSMO roles. It is also a DNS server for our domain, domain.local, which is an Active Directory-integrated zone. We also have a Windows Server 2008 DC named DC2. All servers have the correct DNS entries, etc. However, on all DNS servers there are Event ID 4515 entries indicating that there are duplicate zones in separate directory partitions and that only one will be used until the other is removed. And indeed I see these: there is a zone for domain.local under the default naming partition (CN=System, CN=MicrosoftDNS, DC=domain.local) as well as under the DomainDNSZones partition (DC=DomainDNSZones, DC=DOMAIN, DC=local, CN=MicrosoftDNS). It seems that the copy in the default naming partition is the one currently in use. Which one should be in use? How do I make the Event ID 4515 entries go away? Event ID 4515: http://support.microsoft.com/kb/867464 Thanks

    Read the article

  • What is the aim of this email? Is this a ping/sping? [closed]

    - by mplungjan
    Hi, I received this spam in my catch-all. As webmaster of the domain it was sent to, I am really curious what the reason for this mail is. It was sent to a non-existent user "tania" on my domain (here I used mydomain.zzz). What does the sender want to achieve? Since many mail servers have stopped backscattering, not getting a bounce would not mean anything, would it? And if this is off topic, where in the StackExchange network WOULD it be on topic?

        Delivered-To: [email protected]
        Received: (qmail 8015 invoked from network); 27 Jan 2011 02:32:47 -0000
        Received: from unknown (HELO p3pismtp01-021.prod.phx3.secureserver.net) ([10.6.12.26])
          (envelope-sender <[email protected]>)
          by smtp35.prod.mesa1.secureserver.net (qmail-1.03) with SMTP
          for <[email protected]>; 27 Jan 2011 02:32:47 -0000
        X-IronPort-Anti-Spam-Result: At4FAAlnQE1GVjtCVGdsb2JhbACWXo4gCwEWCA0YJLwyhU8EhRc
        Received: from mx.dt3ls.com ([70.86.59.66]) by p3pismtp01-021.prod.phx3.secureserver.net with ESMTP; 26 Jan 2011 19:32:47 -0700
        Received: from 70.86.59.66 by mx.dt3ls.com (Merak 8.9.1) with ASMTP id JXF39710 for <[email protected]>; Wed, 26 Jan 2011 17:31:10 -0500
        Return-Path: [email protected]
        Status:
        Message-ID: <20110126173109.4d9d6c3f2b@1c3c>
        From: "Tech Support" <[email protected]>
        To: <[email protected]>
        Subject: Information, as instructed.
        Date: Wed, 26 Jan 2011 17:31:09 -0500
        X-Priority: 3
        X-Mailer: General-Mailer v.3
        MIME-Version: 1.0
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit

        Quote: I give it to you not that you may remember time, but that you might forget it now and then for a moment and not spend all your breath trying to conquer it. Because no battle is ever won he said. They are not even fought. The field reveals to a man his own folly and despair, and victory is an illusion of philosophers and fools. (William Faulkner, The Sound and the Fury)

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (no statistics based on a past existence); this is not true if you have fewer than 6 updates to the temporary table. That can lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers, who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue, but rather a temporary table statistics caching issue, part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created; rather, they are cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has few rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • /lib/i386-linux-gnu/libc.so.6 causing segmentation fault & session crash

    - by Fred Zimmerman
    I am having repeated and frequent crashes that end my session whenever I take certain actions, such as loading Gmail under Chrome. Oddly, the same is not happening when I go to Gmail under Chrome. After rooting around in /var/log it appears to me that the trigger is something to do with libc.so.6 (see below). How can I fix this?

        [ 23936.947]
        [ 23936.947] Backtrace:
        [ 23936.948] 0: /usr/bin/X (xorg_backtrace+0x49) [0xb7745089]
        [ 23936.948] 1: /usr/bin/X (0xb75bf000+0x189d7a) [0xb7748d7a]
        [ 23936.948] 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb759c40c]
        [ 23936.948] 3: /usr/bin/X (0xb75bf000+0xfade7) [0xb76b9de7]
        [ 23936.948] 4: /usr/bin/X (ValidatePicture+0x1d) [0xb76bcb8d]
        [ 23936.949] 5: /usr/bin/X (CompositePicture+0xc3) [0xb76bcc83]
        [ 23936.949] 6: /usr/lib/xorg/modules/drivers/intel_drv.so (0xb6f18000+0xcf542) [0xb6fe7542]
        [ 23936.949] 7: /usr/bin/X (0xb75bf000+0x10b1d7) [0xb76ca1d7]
        [ 23936.949] 8: /usr/bin/X (CompositeGlyphs+0xc4) [0xb76b6d84]
        [ 23936.949] 9: /usr/bin/X (0xb75bf000+0x104956) [0xb76c3956]
        [ 23936.949] 10: /usr/bin/X (0xb75bf000+0xfe6f1) [0xb76bd6f1]
        [ 23936.949] 11: /usr/bin/X (0xb75bf000+0x3798d) [0xb75f698d]
        [ 23936.949] 12: /usr/bin/X (0xb75bf000+0x253ba) [0xb75e43ba]
        [ 23936.950] 13: /lib/i386-linux-gnu/libc.so.6 (__libc_start_main+0xf3) [0xb721d4d3]
        [ 23936.950] 14: /usr/bin/X (0xb75bf000+0x256f9) [0xb75e46f9]
        [ 23936.950]
        [ 23936.950] Segmentation fault at address 0x155
        [ 23936.950] Caught signal 11 (Segmentation fault). Server aborting
        [ 23936.950] Please consult the X.Org Foundation support at http://wiki.x.org for help.
        [ 23936.950] Please also check the log file at "/var/log/Xorg.0.log" for additional information.

    Read the article

  • Hyper-V management remotely

    - by Péter
    I'll tell you in advance that I'm a newbie in this topic. I have a Windows 8 (Home) machine with Hyper-V installed, behind a router. The router has a public IP and a domain attached. I have another Windows 8 (Work) machine, also with Hyper-V installed. I want to access my home Hyper-V from work via Hyper-V Manager so I can manage my virtual machines. I found this article but I don't know if it's applicable to me. I thought that simple port forwarding should work: I would only need to point the work Hyper-V Manager at my domain and the chosen port, and if a login form pops up, fill in the user data of my home computer. How can I solve this? My thoughts revolve around: port forwarding (set domain + port and use my home user), or setting up a VPN and using the local IP address of my home computer (which looks a little cumbersome, and my router only supports PPTP). I'm open to any other solution too. Thanks, Péter

    Read the article

  • BPM Solution Catalogue–promote your process templates

    - by JuergenKress
    Oracle’s BPM Solution Catalogue, showcasing our solutions with partners, is now live. Take a look at the initial entries here. We are planning to use this catalogue not only to publish and highlight successful BPM implementations, but also to run campaigns in industry verticals with your solutions. If you have delivered a successful BPM implementation and think it could be reused and applied again in a similar scenario, in the same industry or in a similar environment, then we are keen to know about it and will add it to the solution catalogue. The solution catalogue will showcase successful BPM solutions both inside and outside Oracle. Be in touch with us on this via this e-mail id and we will make sure to add your solution. For more information you can also read the article "Leading-Edge BPM Benefits Without Bleeding-Edge Pain". SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: BPM,Solution Catalogue,process templates,BPM Suite,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Request format is unrecognized for URL unexpectedly ending exception in web service.

    - by Jalpesh P. Vadgama
    Recently I was getting this error when calling a web service from JavaScript. After searching the net and debugging, I found the following: a web service can support three kinds of protocol bindings: HttpGet, HttpPost and SOAP. In .NET Framework 1.0 they were enabled by default, but in later versions the HTTP bindings are disabled by default due to security issues and WS-* specifications. So we have to enable them via configuration settings in web.config. Here is the code for that:

        <configuration>
          <system.web>
            <webServices>
              <protocols>
                <add name="HttpGet"/>
                <add name="HttpPost"/>
              </protocols>
            </webServices>
          </system.web>
        </configuration>

    Hope this will help you. Stay tuned for more. Till then, happy programming!!! Technorati Tags: WebService,Request,Javascript,Ajax
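    For context, here is the kind of JavaScript call that depends on the HttpPost binding being enabled; a minimal sketch where the service path, method and parameter are placeholder names, not from the original post:

        // Plain XMLHttpRequest POST against an ASMX endpoint with HttpPost enabled.
        // "MyService.asmx", "HelloWorld" and "name" are hypothetical names.
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "/MyService.asmx/HelloWorld", true);
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                console.log(xhr.responseText);  // HttpPost returns a small XML document
            }
        };
        xhr.send("name=" + encodeURIComponent("world"));  // form-encoded parameters

    Without the <add name="HttpPost"/> entry above, the same request fails with the "Request format is unrecognized" error.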

    Read the article

  • How can I use 2 monitors plus laptop with my Dell e6420 w/ Nvidia nvs 4200m

    - by KallDrexx
    I have just hooked up a second external monitor to my Dell E6420 laptop with an Nvidia NVS 4200M graphics card, running Windows 8 64-bit. However, the computer won't let me have both monitors and the laptop display active at the same time. I installed the latest Nvidia graphics drivers (310.70), but they claim that my GPU can only support up to 2 monitors. Nvidia's website implies differently (as do various other laptops around the office). The monitors are both connected via DVI to my Dell docking station, which has multiple DVI ports. Both monitors work correctly; I just can't get all 3 displays working together. Attempting to download the driver from Dell fails, as their driver installer is apparently broken. Any ideas?

    Read the article

  • Any good method for mounting Hadoop HDFS from another system?

    - by Beel
    I want to mount Cloudera Hadoop as a Linux file system over the LAN. As a setup, I already have the Hadoop cluster running on a set of Ubuntu machines, but now I need to be able to use it as a normal file system from a Fedora system over the LAN. I tried FUSE, but two things: 1. Cloudera says FUSE loses data (click here for that comment by a Cloudera employee on the official Cloudera support site). 2. I've had no success making it work the way we want. As a point of clarification, I am using Hadoop ONLY for the file system, not for its other capabilities.

    Read the article

  • Taking HRMS to the Cloud to Simplify Human Resources Management

    - by HCM-Oracle
    By Anke Mogannam With human capital management (HCM) a top-of-mind issue for executives in every industry, human resources (HR) organizations are poised to have their day in the sun, proving not just their administrative worth but their strategic value as well. To make good on that promise, however, HR must modernize. Indeed, if HR is to act as an agent of change, providing the swift reallocation of employees and the rapid absorption of employee data required for enterprises to shift course on a dime, it must first deal with the disruptive change at its own front door. And increasingly, that means choosing the right technology and human resources management system (HRMS) for managing the entire employee lifecycle. Unfortunately, for most organizations, this task has proved easier said than done. This is because, while much has been written about advances in HRMS technology, until recently most of those advances took the form of disparate on-premises solutions designed to serve very specific purposes. Although this may have resulted in key competencies in certain areas, it also meant that processes for core HR functions like payroll and benefits were carried out in separate systems from those used for talent management, workforce optimization, training, and so on. With no integration, and no single system of record, processes were disconnected, ease of use was impeded, user experience was diminished, and vital data was left untapped. Today, however, that scenario has begun to change, and end-to-end cloud-based HCM solutions have moved from wished-for innovations to real-life solutions. Why, then, have HR organizations been so slow in adopting them? The answer, it would seem, is: "It's complicated." So complicated, in fact, that 45 percent of the respondents to PwC's "Annual HR Technology Survey" (for 2013) reported having no formal HR software roadmap, and 40 percent stated that they "did not know" whether their organizations would be increasing their use of cloud or software as a service (SaaS) for HR. Clearly, HR organizations need help sorting through the morass of HR software options confronting them. But just as clearly, there's an enormous opportunity awaiting those that do. The trick will come in charting a course that allows HR to leverage existing technology while investing in the cloud-based solutions that will deliver the end-to-end processes, easy-to-understand analytics, and superior adaptability required to simplify, and add value to, every aspect of employee management. The opportunity, therefore, is to cut costs, drive innovation, and increase engagement by moving to cloud-based HCM: you will benefit from one interface, leverage many access points, and gain at-a-glance insight across your entire workforce. With many legacy on-premises HR systems no longer efficient, and cloud-based, integrated systems that span the range of HR functions finally reaching maturity, the time is ripe for moving core HR to the cloud. Indeed, per Cedar Crestone's 2013-2014 HRMS survey, for the first time ever there are more HRMS replacement initiatives than HRMS upgrade initiatives under way, and the majority of them involve moving to the cloud. To learn how you can launch your own cloud HCM initiative and begin using HR to power the enterprise, visit Oracle HRMS in the Cloud and Oracle's new Customer 2 Cloud program.
Anke Mogannam brings more than 16 years of marketing and human capital management experience in the technology industries to her role at Oracle where she is part of the Human Capital Management applications marketing team. In that role, Anke drives content marketing, messaging, go-to-market activities, integrated marketing campaigns, and field enablement. Prior to joining Oracle, Anke held several roles in communications, marketing, HCM product strategy and product management at PeopleSoft, SAP, Workday and Saba. Follow her on Twitter @amogannam

    Read the article

  • Mac OS X 10.4 Tiger Will Not Boot

    - by keithbrown38
    My G5 tower all of a sudden is ONLY booting to the registration page, the one where, after you installed the software, you are supposed to register. I can't get it to boot in safe mode, so I am using my USB 2.0 SATA cable to connect my drive to my MacBook Pro, and I am using CCC (Carbon Copy Cloner) disc-to-disc clone to copy it to my external WD Passport drive, which makes a startup disc along with all the info from my drive. My question is: was there anything easier and less time-consuming than that? Does anyone know why this would happen all of a sudden? Thank you all in advance for your help and support! Keith B

    Read the article

  • Our GoDaddy web server is drowning in temp files!!

    - by temp file guy
    We have a virtual dedicated server with a fairly large amount of traffic, hosted at GoDaddy and managed through cPanel. We have 10 GB of space, of which about 80% is not our content but logs and server utilities. GoDaddy support is evasive, and they are trying to encourage us to migrate to a new service with 15 GB. Reviewing the large files, we found the following. We have a ton of old TMP files in this directory:

        /public_html/files/TMP/FILE_PERSISTANCE_PROVIDER (no access)

    And some large files in these directories:

        /usr/local/apache/logs/
          - suphp_log (220M)
          - access_log (7M)
          - error_log (5M)
        /usr/local/apache/domlogs/ (no access)
        /usr/local/cpanel/ (no access)
        /usr/local/cpanel-rollback
        /tmp

    Questions: What can we safely delete or truncate? How can we change permissions on files we have no access to so we can delete them? Is there a utility to monitor and clean up temp files? Are there other files/programs that we can delete? Thanks!

    Read the article

  • Should you put personal beliefs in your program?

    - by TheLQ
    Recently I've found two examples of programmers' personal beliefs removing or crippling useful functionality in their programs: uTorrent showing the current connection speed in KB (rarely used) rather than Kb (what most ISPs and other programs use as their metric), and Kleopatra (the gpg4win key manager) not having PGP key pictures, since they "give a false sense of security" and "increase the size of certificates". While the latter is true, the former is debatable. Both of these hurt the programs and their usefulness to me. Various attempts by others and me to get at least an option to show speeds in Kb have ended with "ISPs should use KB". uTorrent's forums used to be filled with people saying they have a 10 Mb download pipe but uTorrent only goes up to 2 MB (not knowing that Mb != MB), and with feature requests to show speeds in Mb. Kleopatra has lost usefulness to me since I don't have the ability to add pictures to PGP keys. These are all political statements: developers attempting to effect change on issues that are, in their minds, important. But should this come at a cost to end-user functionality? If a programmer heavily believes X but everyone else believes Y, should the programmer refuse to add support for Y because in their mind X is horrible? In short, should a programmer make political statements in their program?
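    For what it's worth, the forum confusion is simple arithmetic: a byte is 8 bits, so megabits per second divided by 8 gives megabytes per second. An illustrative snippet (the function is mine, not from either program):

        // Bits vs. bytes: the mix-up behind "my 10 Mb pipe only reaches 2 MB".
        function mbitToMByte(mbitPerSec) {
            return mbitPerSec / 8;  // 1 byte = 8 bits
        }

        console.log(mbitToMByte(10));  // 1.25 -> a 10 Mb/s ISP plan peaks at 1.25 MB/s
        console.log(mbitToMByte(16));  // 2    -> seeing 2 MB/s implies a 16 Mb/s link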

    Read the article

  • Best LXDE based distro/distro that supports LXDE?

    - by Misha Koshelev
    Lubuntu is nice, but it seems its LXDE version is not as up to date as the Fedora LXDE spin's, or even Debian squeeze with LXDE installed. I do like Chromium on Lubuntu though; it's faster and a nice touch. So, any good recommendations? I am fairly used to Ubuntu and the dpkg/apt commands, but am willing to learn. I am looking for a lightweight 64-bit distribution for my main laptop (it is by no means "old" or "low spec", but I like that Lubuntu starts up in like 2 seconds). Anyway, as you can see I have a strong Lubuntu bias, but there are issues like: the LXDE version seems not to be recent (especially in the 10.04 release, which works more stably for me, with Nvidia drivers etc.); and the 64-bit install is currently a pain, requiring first an install of the minimal CD or alternate CD (both of which require wired Ethernet), then an install of Lubuntu from a PPA. Native 64-bit support would be nice. Linux Mint LXDE, for example, is also only 32-bit. Thank you so much

    Read the article

  • Easy Made Easier - Networking

    - by dragonfly
    In my last post, I highlighted the feature of the Appliance Manager Configurator to auto-fill some fields based on previous field values, including host names based on System Name and sequential IP addresses from the first IP address entered. This can make configuration a little faster and a little less subject to data entry errors, particularly if you are doing the configuration on the Oracle Database Appliance itself.

    The Oracle Database Appliance Appliance Manager Configurator is available for download here. But why would you download it, if it comes pre-installed on the Oracle Database Appliance? A common reason for customers interested in this new Engineered System is to get a good idea of how easy it is to configure. Beyond that, you can save the resulting configuration as a file and use it on an Oracle Database Appliance. This allows you to verify the data entered in advance, and in the comfort of your office. In addition, the topic of this post is another strong reason to download and use the Appliance Manager Configurator prior to deploying your Oracle Database Appliance.

    The most common source of hiccups in deploying an Oracle Database Appliance, based on my experiences with a variety of customers, involves the network configuration. It is during Step 11, when network validation occurs, that these come to light, which is almost halfway through the 24 total steps, and can be frustrating, whether it was a typo, a DNS misconfiguration or an IP address already in use. This is why I recommend as a best practice taking advantage of the Appliance Manager Configurator prior to deploying an Oracle Database Appliance.

    Why? Not only do you get the benefit of being able to double-check your entries before you even start on the Oracle Database Appliance, you can also take advantage of the Network Validation step. This is the final step before you review all the data and can save it to a text file. It can be skipped if you aren't ready or are not connected to the network that the Oracle Database Appliance will be on. My recommendation, though, is to run the Appliance Manager Configurator on your laptop, enter the data or re-load a previously saved file of the data, and then connect to the network that the Oracle Database Appliance will be on. Now run the Network Validation. It will check that the host names you entered are in DNS and resolve to the IP addresses you specified. It will also ping the IP addresses you specified, so that you can verify that no other machine is already using them (yes, that has happened at customer sites).

    After you have completed the validation, as seen in the screenshot below, you can review the results and move on to saving your settings to a file for use on your Oracle Database Appliance, or, if there are errors, you can use the Back button to return to the appropriate screen and correct the data. Once you are satisfied with the Network Validation, just check the Skip/Ignore Network Validation checkbox at the top of the screen, then click Next. Is the Network Validation in the Appliance Manager Configurator required? No, but it can save you time later. I should also note that the Network Validation screen is not part of the Appliance Manager Configurator that currently ships on the Oracle Database Appliance, so this is the easiest way to verify your network configuration.

    I hope you are finding this series of posts useful.
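    As a rough illustration of the DNS half of that check, the sketch below (JavaScript on Node.js, with invented host names and addresses; the real validation is built into the Appliance Manager Configurator) verifies that each planned host name resolves to exactly the address that was entered:

        // Illustrative only: confirm planned host names resolve in DNS to the
        // IP addresses we intend to assign (forward lookup; ping not shown).
        var dns = require("dns").promises;

        var plan = {                    // hypothetical configuration entries
            "oda-node1": "192.0.2.11",
            "oda-node2": "192.0.2.12"
        };

        async function validate() {
            for (var host of Object.keys(plan)) {
                var expected = plan[host];
                try {
                    var addrs = await dns.resolve4(host);
                    if (addrs.indexOf(expected) !== -1) {
                        console.log(host + ": OK (" + expected + ")");
                    } else {
                        console.log(host + ": resolves to " + addrs.join(", ") + ", expected " + expected);
                    }
                } catch (err) {
                    console.log(host + ": not found in DNS (" + err.code + ")");
                }
            }
        }

        validate();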
My next post will cover some aspects of the windowing environment that gets run by the 'startx' command on the Oracle Database Appliance, since this is needed to run the Appliance Manager Configurator via a direct connected monitor, keyboard and mouse, or via the ILOM. If it's been a while since you've used an OpenWindows environment, you'll want to check it out.

    Read the article
