Search Results

Search found 21197 results on 848 pages for 'webcenter content'.


  • Partner Webcast - Oracle Data Integration Competency Center (DICC): A Niche Market for services

    - by Thanos Terentes Printzios
    Market success now depends on data integration speed. This is why we collected best practices from the most advanced IT leaders, simply to prove that a Data Integration Competency Center should be the first new IT team you establish. This is a niche market with unlimited potential for partners becoming the much-needed data integration services providers trusted by customers. We would like to elaborate with OPN Partners on the Business Value Assessment and Total Economic Impact of the Data Integration Platform for end users, while justifying re-organizing your IT services teams. We are happy to share our research on:
    - The economic impact of a data integration platform/competency center
    - Justifying the strongest reasons and differentiators, using numeric analysis and best-practice customer case studies from specific industries
    - Utilizing diagnostics and health-check analysis in building a business case for your customers
    - What exactly is so special about the technology of Oracle Data Integration
    - The impact of growing data volumes and numbers of data sources
    - Analysis of the solutions usually implemented so far, addressing key challenges and mistakes
    During this partner webcast we will balance business-case-centric content with extensive numerical ROI analysis. Join us to find out how to build a unified approach to moving, sharing, and integrating data across the enterprise, and why this is an important new services opportunity for partners.
    Agenda:
    - Data Integration Competency Center
    - Oracle Data Integration Solution Overview
    - Services Niche Market for OPN
    - Summary and Q&A
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend.
    Presenter: Milomir Vojvodic, EMEA Senior Business Development Manager for the Oracle Data Integration Product Group
    Date: Thursday, September 4th, 10am CEST (8am UTC / 11am EEST)
    Duration: 1 hour
    Register today. For any questions please contact us at [email protected]

    Read the article

  • How to remove a large number of files/folders in Linux

    - by user1745713
    We are using Hadoop to split a table into smaller files to feed to Mahout, but in the process we created a huge number of _temporary logs. We have an NFS mount for the Hadoop volume, so we can use all the Linux commands to delete folders and files, but we just can't get them deleted. Here's what I've tried so far:
    - hadoop fs -rmr /.../_temporary : hangs for hours and does nothing
    - on the NFS mount: rm -rf /.../_temporary : hangs for hours and does nothing
    - find . -name '*.*' -type f -delete : same as above
    The folders look like this (38 of these folders inside _temporary):
    drwxr-xr-x 319324 user user 319322 Oct 24 12:12 _attempt_201310221525_0404_r_000000_0
    The contents of these are actually folders, not files. Each one of those 319322 folders has exactly one file inside. Not sure why it does the logging this way. Any help is appreciated.
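
    One approach that is often much faster than rm or find on trees with millions of entries is to let rsync mirror an empty directory over the target, since rsync deletes with less per-file overhead. A minimal sketch, with placeholder paths (untested against this exact layout):

        # create a scratch empty directory
        mkdir /tmp/empty
        # make _temporary identical to the empty directory, i.e. delete its contents
        rsync -a --delete /tmp/empty/ /path/to/_temporary/
        # remove the now-empty directory itself
        rmdir /path/to/_temporary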

    Read the article

  • How can I create an orthographic display that handles different screen dimensions?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display/GUI overlaid on top. However, this problem would also apply if I were to port my game to a computer and run it in a resizable window, or allow the user to change screen resolutions...
    When trying to make the 2D GUI/HUD work, I've made the assumption that all I'm really doing is drawing a load of 2D textured quads on the screen, and I am trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,screenHeight in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone, or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use.
    Should I be trying to map the 2D OpenGL coordinates 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality?
    I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustumf() calls.
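
    For reference, the conventional way to get the 0,0-in-the-upper-left mapping with the GLES 1.1-style helper the question mentions is to flip the top and bottom arguments, passing the current (post-rotation) dimensions. A sketch, assuming a glOrthof-compatible signature:

        // left, right, bottom, top, near, far: top=0 and bottom=height puts
        // the origin in the upper-left corner, with one unit per pixel
        glOrthof(0.0f, screenWidth, screenHeight, 0.0f, -1.0f, 1.0f);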

    Read the article

  • Deleting Time Machine backups in Mac OS X 10.6.4

    - by cappuccino
    Does anyone know how to delete Time Machine backups in Mac OS X 10.6.4? Before answering:
    1. sudo rm -rf /whateverthetimemachineis does not work.
    2. Disabling the ACL permissions first with sudo fsaclctl -p /whatever -d does not work: sudo: fsaclctl: command not found.
    3. The "delete all backups" feature in Time Machine is slow as hell and would take days. I need a command-line solution.
    4. No, I don't want to reformat the drive; I have other content on it. And no, don't say I should have separated it onto two partitions or two drives. I did it this way since partitions cannot be dynamically resized, and two drives are annoying (what's the point of having a big drive?), plus that has no relation to the issue at hand.
    I've already googled for hours and read everything on Super User; nothing is working, and all the suggested solutions are the four above. Any clues?
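
    One workaround that circulated for 10.5/10.6 (hedged: the helper's exact location inside the kext bundle varies by OS release, and it deliberately bypasses Time Machine's safety net, so double-check the target path before running it) is to invoke rm through the bypass helper that ships with TMSafetyNet:

        # locate the bypass helper first; its subpath differs between releases
        sudo find /System/Library/Extensions/TMSafetyNet.kext -name bypass
        # then run rm through it, e.g.:
        sudo /System/Library/Extensions/TMSafetyNet.kext/Contents/Helpers/bypass \
            rm -rfv "/Volumes/YourDrive/Backups.backupdb"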

    Read the article

  • Consume an XML Feed with PowerPoint 2010

    - by Matt Schweers
    Hi there. I'm looking for a way to consume an XML feed from a web service directly in PowerPoint 2010. I found the LiveWeb plugin (http://skp.mvps.org/liveweb.htm) for PowerPoint which, while pretty cool, really only pulls in actual web content in a way that feels more like an iframe. Ideally, I would like to consume a raw XML web service/feed with PowerPoint, parse it, and stylize the results. Is this possible? Even reading from a static XML file would be a good start.

    Read the article

  • Postfix not delivering mails

    - by Sotocan
    Hi all, I have problems with a recently configured Postfix MTA. When Postfix starts, the following warning appears:
    postfix/qmgr[5078]: warning: connect to transport private/filter: No such file or directory
    I have amavisd-new as a content filter, but the warning appears even if I comment out the relevant line. As a result (I think) of the above, I get errors like the one below for every virtual domain that I have:
    postfix/error[5080]: 254851834107: to=, relay=none, delay=13082, delays=13082/0.01/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable)
    The good news is that somehow I managed to fix that (don't ask me how!). The problem is that I now have 50 or so mails in the queue that were affected by the aforementioned problem. If I run "postqueue -f" I get the same style of error as before (mail transport unavailable); however, new mails are delivered to their final destination properly. Any suggestions? Kind regards.
    P.S. Local mail delivery from/to Unix and virtual users was OK right from the beginning!
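
    For what it's worth, that first warning usually means main.cf references a transport (content_filter = filter:...) that master.cf never defines. A minimal sketch of the missing master.cf entry; the service name must match what main.cf references, and the user and script path here are placeholders:

        # /etc/postfix/master.cf -- define the "filter" pipe transport
        filter    unix  -       n       n       -       10      pipe
          flags=Rq user=filter argv=/path/to/filter-script -f ${sender} -- ${recipient}

    As for the stuck messages: deferred mail keeps its originally assigned (broken) transport, so flushing alone may not help; requeueing forces Postfix to re-resolve the transport:

        sudo postsuper -r ALL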

    Read the article

  • China and Gmail attacks

    - by doug
    "We have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists. Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” [source] I don't know much about how internet works, but as long the chines gov has access to the chines internet providers servers, why do they need to hack gmail accounts? I assume that i don't understand how submitting/writing a message(from user to gmail servers) works, in order to be sent later to the other email address. Who can tell me how submitting a message to a web form works?

    Read the article

  • Read the contents of a ComboBox (or any other windows control) [closed]

    - by Homer
    I have a program that reads the contacts from my phone, but I can't export them from it. I'd like to use another program to read the contents of the controls in the original program. In this case, I would like to export the content of a dropdown list (ComboBox) containing these contacts. Can someone recommend a good program for this, or suggest another method? I know I saw something for this last year on lifehacker.com in a collection of diagnostic tools, but I can't find it now.

    Read the article

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering whether a framework is necessary, or at least a must-have, if I plan to make a large website. "Large website" could mean a lot of things; in this case it means multiple dynamic web pages (40-50 dynamic pages with MySQL content) and a lot of visitors (roughly a million hits per month). The site will be hosted in a dedicated server environment.
    I know that a framework could simplify coding for a developer team, that it includes libraries, and that it has a lot of advantages. But I just feel that I don't need that. I think that learning how it works, managing it, and installing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP.
    As I said, the site will be hosted in a dedicated server environment, but it will be entirely managed by the hosting company. I am pretty sure the control panel for me to manage the basic stuff will be cPanel. For a developer like me who only knows PHP, JavaScript, HTML, CSS, MySQL, and really basic server management, adopting a framework seems too complicated. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

    Read the article

  • Is there any way to optimize my search blob program?

    - by Vicky
    I wrote this code to search blob items (text files) by their content. For example, if I search for "Good", the names of the files that contain "Good" or "good" should appear in the search results. My code is working, but I want to optimize it.

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Blob;

        class BlobSearch
        {
            public static int num = 1;

            static void Main(string[] args)
            {
                string accountName = "accountName";
                string accessKey = "accesskey";
                string azureConString = "DefaultEndpointsProtocol=https;AccountName=" + accountName + ";AccountKey=" + accessKey;
                string blob = "MyBlobContainer";

                Console.WriteLine("Type and enter to search : ");
                string searchText = Console.ReadLine();

                CloudStorageAccount account = CloudStorageAccount.Parse(azureConString);
                CloudBlobClient blobClient = account.CreateCloudBlobClient();
                CloudBlobContainer blobContainer = blobClient.GetContainerReference(blob);
                blobContainer.FetchAttributes();
                var blobItemList = blobContainer.ListBlobs();

                // Block until the search finishes; an async void method cannot be
                // awaited and may silently die before printing anything.
                GetBlobList(searchText, blobContainer, blobItemList).Wait();
                Console.ReadLine();
            }

            private static async Task GetBlobList(string searchText, CloudBlobContainer blobContainer, IEnumerable<IListBlobItem> blobItemList)
            {
                foreach (var item in blobItemList)
                {
                    // The listing already yields blob objects, so cast instead of
                    // re-resolving each one from its full URI.
                    var blockBlob = item as CloudBlockBlob;
                    if (blockBlob != null && blockBlob.Name.EndsWith(".txt"))
                    {
                        await Search(searchText, blockBlob);
                    }
                }
            }

            private static async Task Search(string searchText, CloudBlockBlob blockBlob)
            {
                string text = await blockBlob.DownloadTextAsync();
                if (text.IndexOf(searchText, StringComparison.OrdinalIgnoreCase) != -1)
                {
                    Console.WriteLine("Result : " + num + " => " + blockBlob.Name.Substring(blockBlob.Name.LastIndexOf('/') + 1));
                    num++;
                }
            }
        }

    I think blobContainer.ListBlobs() is blocking code, because the search will not start until all the blob items are loaded. Is there any way to optimize that, or anything else in my code? Thanks.

    Read the article

  • Move unity launcher to bottom of the screen

    - by argvar
    I have the Ubuntu 13.04 DESKTOP version, and for some odd reason I'm told that the Unity launcher cannot be moved to the bottom of the screen, for several reasons:
    1. Canonical wants it there so it fits with their overall design goals, namely when it comes to touchscreen devices and netbooks. This in my mind totally ignores the fact that most Ubuntu users are DESKTOP users. No matter what Canonical's long-term goal is, it surely mustn't be at the expense of the needs of their core user base.
    2. Most monitors are widescreen, so the launcher is more compact where it is. This not only takes away the user's choice, but is also a wrong assessment. Widescreen monitors can sometimes be rotated on a pivot, giving them a portrait aspect. By displaying the Unity launcher on the left side it takes up a lot of space. Many desktop users have multiple monitors, and having the launcher on the left side of each monitor is very awkward. Also, many websites are designed to fit on half of a 1920-pixel-wide display, so you can have two browser windows open side by side with all content visible. The placement of the Unity launcher takes away horizontal space, meaning there's less room for each browser window, and you'll see the right side of the web pages being occluded.
    Any suggestion to simply hide the Unity launcher, or "Canonical knows best", or "get used to it" is unwelcome and totally ignores the above points. Linux is about choice. Canonical's stubbornness with the Unity launcher placement is inconsistent with what Linux is about.

    Read the article

  • Identifying .doc/.docx files that contain images

    - by rev
    I'm moving my notes to Evernote. To this end I need to convert .doc/.docx files to RTF; the reason is that I have a script to import RTF into Evernote. However, some of my .doc/.docx files contain images. Is there any way to identify which .doc/.docx files contain images without viewing them all? I have thousands. That way I can simply open the few that have images and copy/paste their entire content straight into Evernote. I should say that I'm using OS X 10.6.8.
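
    For the .docx files at least, this is straightforward, because .docx is a zip archive that stores embedded images under word/media/. A hedged sketch (the .doc binary format is another matter and would need a different tool):

        # print the name of every .docx in the folder that carries embedded media
        for f in *.docx; do
            if unzip -l "$f" | grep -q "word/media/"; then
                echo "$f contains images"
            fi
        done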

    Read the article

  • Photos being copied all over the place

    - by plua
    We have a rather popular website with plenty of photos. Our whole business depends on our content, and the photos are an important part of it. We invest a lot of time, effort, and money into taking these pictures. On our website we have clear copyright notices, we have the website name and logo in the photos, and we have a photo licensing page which states the prices for licensing our photos. Despite all this, our photos are copied by personal and commercial websites alike. We really want to do something about this. We do not want them to simply take down the photos and leave it at that; we want them to pay for the usage, as we clearly state on our website. Now a few questions come to mind:
    - Can we legally force them to pay right away, or are we obligated to first send a "cease and desist" letter?
    - Our photos are used on websites throughout the world. Are there any worldwide rules for this? Does anybody have experience with pursuing these things outside their home country?
    - Should we hire a lawyer in each country, or could a local lawyer contact overseas companies directly?

    Read the article

  • What software can I use to create a video of following type?

    - by Bond
    Here is a video: http://www.youtube.com/watch?v=kSx873lOgIc&feature=player_embedded#at=62
    Someone among my bosses wants to create something similar to the video at the above link and has asked me what software they can use to do the same. The purpose is to create educational content only, with demonstrations (animations) and audio running in the background. I am not clear on what software can be used for this on Linux or Windows; I have users on both. I have done video editing on a Mac using Final Cut Pro, but the video at the above link is not something that can be achieved with FCP (or maybe I am not aware how). I am looking for a solution for:
    1) Linux users
    2) Windows users
    In the case of Linux it is Ubuntu for some users and Fedora for others. I am a Linux guy, so I am specifically posting this question in terms of Ubuntu, but I also need suggestions for the Windows users. I have no clue about such video animations at all.

    Read the article

  • Hosting and scaling a Facebook application in the cloud? [migrated]

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but we are still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app:
    - It would be HTML-based, like a website, using Django as the framework.
    - 100K pageviews a day are expected if the app goes viral.
    - The users will not generate any media content; only some database data will be generated by them.
    It would be great if someone with more experience could give guidance on the following points:
    A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace: the preferable points we found in App Engine were ease of deployment, cost effectiveness, and easy scaling. For EC2: full control of the virtual machine, plus Amazon's NoSQL and RDBMS database services in case we decide to use them.
    B) Does the backend technology affect the monthly cost? E.g., would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments)
    C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove to be better than raw cloud management?
    It is not that we are trying for premature scaling; we just want a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • trouble with AD and profile import

    - by GeorgeWNYC
    I am the involuntary admin for a MOSS 2007 site. We use profile import from AD, from two domains: MyCompany.com and AM.MyCompany.com. I was looking at the log for the PEOPLE_DL_IMPORT content source and it has many entries like:
    spsimport://?$$dl$$/MyCompany.com/MyCompany.com/MyCompany.com/am.MyCompany.com/MyCompany.com/am.MyCompany.com/MyCompany.com/am.MyCompany.com/am.MyCompany.com/MyCompany.com/am.MyCompany.com/am.MyCompany.com/am.MyCompany.com/MyCompany.com/MyCompany.com/am.MyCompany.com/am.MyCompany.com
    It certainly doesn't look right. Is this normal? What can I do to remedy it? Can I start over? There are users already in SharePoint, and some of them are in SharePoint groups for permission purposes.

    Read the article

  • Linux FTP server with virtual users

    - by kjertil
    I know there are already similar questions on this matter, but the answers don't really make much sense to anyone who is not really technically comfortable in Linux. I've already tried articles like this one, for example: http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/ (with the result of accidentally breaking the whole system). The problem is that, while there are several technical possibilities to set up virtual users with an FTP server, it is not as easy as managing, for instance, a FileZilla server on Windows. I've seen some web-based GUIs, but most of them seem to be out of date. The different flavours of Linux and the large number of popular FTP servers also seem to complicate the matter. I guess my question is: is there a way to set up virtual FTP users on Linux without the hassle of having to manually edit PAM, MySQL, and config files?

    Read the article

  • btrfs won't run from cron

    - by Mikkel
    I'm trying to set up a cron job to create a btrfs subvolume snapshot of my root partition. The command works perfectly if I run it from the command line, but nothing happens at the scheduled cron time. I've tried piping to logger and redirecting stdout/stderr to a file, and not only is there no content, the file I'm logging to isn't even created. The cron command I have is as follows:
    0 0 * * * /sbin/btrfs subvolume snapshot / "/snapshots/$(date +%Y-%m-%d)"
    I've tried prefixing it with /bin/bash, but that makes no difference. What am I missing?
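
    Worth checking first: in a crontab, an unescaped % is special (crontab(5) treats the first % as the start of the command's stdin and converts the rest to newlines), so the $(date +%Y-%m-%d) part never reaches the shell intact, and the job can fail before any redirection is even set up. A sketch of the escaped line:

        # escape the % signs so cron passes them through to the shell unchanged
        0 0 * * * /sbin/btrfs subvolume snapshot / "/snapshots/$(date +\%Y-\%m-\%d)"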

    Read the article

  • Practical way to set up an email inbox for testing?

    - by Ben Collins
    I need to test a high-volume email application. Up to now, I've just been using ad-hoc Gmail aliases ([email protected]) to receive emails and check that the content is right. However, I now need to verify a recipient list, which means I need to get every single email that goes out on a particular test run. The problem with Google isn't clear, but there's some throttling somewhere (perhaps from SendGrid, which is my delivery provider), and only a very small number of those emails ever make it to my account. So: what's a practical way to get where I want to be? Is there a way to get Gmail to just accept everything? Is there a web app / service somewhere that will let me throw up a fake email address that can receive mail for a large number of recipients? Is there something else?
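
    One self-hosted option, as a hedged sketch (assuming a throwaway Postfix box and a test domain you control; "catchall" must be an existing local account), is a catch-all that accepts mail for every recipient at the domain and drops it into one local mailbox you can inspect:

        # /etc/postfix/main.cf -- deliver mail for any recipient to one test user
        mydestination = test.example.com
        local_recipient_maps =
        luser_relay = catchall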

    Read the article

  • Web server replica not working in other server

    - by user761076
    I have a Drupal installation (PHP + MySQL) on a server, and I'm trying to copy this installation to another server with the same configuration: same physical and virtual paths, same DB configuration, etc. The thing is, on my new server I get the homepage to work, but not the inner pages, so I guess it has something to do with rewriting (mod_rewrite is installed, and both .htaccess files are the same). When I access http://localhost/myweb/content/mypage I get a 404, or a "Forbidden" if I uncomment this in httpd.conf (the original httpd.conf does not have this entry):

    <Directory "/path/to/docs">
        DirectoryIndex index.php index.html
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    Any clue? Thank you
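
    One thing stands out in that block (hedged, since the rest of the config isn't shown): AllowOverride None makes Apache ignore .htaccess files entirely, so Drupal's mod_rewrite rules for clean URLs never run, which would explain the homepage working while inner paths 404. A sketch of the change:

        <Directory "/path/to/docs">
            DirectoryIndex index.php index.html
            Options FollowSymLinks
            # "All" (or at least "FileInfo") lets Drupal's .htaccess
            # rewrite rules take effect
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>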

    Read the article

  • Java GUI Hello World [closed]

    - by user58892
    I am designing, implementing, testing, and debugging a GUI-based version of a "Hello, World!" program in a JFrame that includes a JLabel reading "Hello, World!", using a layout manager and an Exit button to close the program. Here's what I have so far. I would really appreciate it if you could help with the syntax; I am about 90% done, but however hard I try, it won't run.

        import java.awt.*;            // needed for the FlowLayout manager
        import java.awt.event.*;      // needed for the Exit button's ActionListener
        import javax.swing.*;         // all Swing components live in javax.swing

        public class HelloWorld {

            public static void main(String[] args) {
                // Create the label. The JLabel constructor takes an optional
                // argument which sets the text of the label. Center the text;
                // otherwise it will align on the left.
                JLabel label = new JLabel("Hello, World!");
                label.setHorizontalAlignment(SwingConstants.CENTER);

                JFrame frame = new JFrame("Hello");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setLayout(new FlowLayout());   // the layout manager

                // Create the Exit button and wire it to close the program.
                JButton button1 = new JButton("Exit");
                button1.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        System.exit(0);
                    }
                });

                // Add the components to the content pane.
                frame.add(label);
                frame.add(button1);

                frame.setSize(300, 300);
                frame.setLocationRelativeTo(null);   // center the window
                frame.setVisible(true);
                frame.toFront();
            }
        }

    Read the article

  • How to write PowerShell code, part 3 (calling an external script)

    - by ybbest
    In this post, I'd like to show you how to call an external script from a PowerShell script. I'll use the site creation script as an example. You can download the script here.
    1. To call the external script, you first need to grab the script path. You can do so by calling Split-Path $myInvocation.MyCommand.Path to get the current script's folder, and then use that to build the path to your external script:

        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $ExternalScript = $scriptPath + "\CreateSiteCollection.ps1"
        $configurationXmlPath = $scriptPath + "\SiteCollection.xml"
        [xml] $configurationXml = Get-Content $configurationXmlPath
        & "$ExternalScript" $configurationXml
        Write-Host

    2. If you'd like to pass in any parameters, you need to define the script's parameters in param() at the top of the called script, separating each parameter with a comma (,). When calling the script, you do not separate the arguments with commas.

        #Pass in the Parameters.
        param ([xml] $xmlinput)

    Read the article

  • Trac/SVN to DVCS Migration

    - by quanticle
    The project I'm currently working on is using Trac, with SVN integration. It's worked great until now. Now, however, we've taken on some additional developers and we're running into issues with branching and merging. Because of this, I think a move to a distributed version control system is in order. The problem is that Trac is very closely integrated with the SVN repository. We have tight integration between the tickets and the revision numbers of code changes corresponding to those tickets. In addition we have a support wiki that has a lot of data that helps the tech. support team. Is there a way we can migrate to git or mercurial without losing the benefits of Trac? I've looked at the git plugin for Trac, and I'm unsure of how well it works. Has anyone here used it with a project that's been migrated from SVN? EDIT: I should note that the most important priority for us is maintaining the links between Trac tickets and the corresponding changesets in SVN. That's a tool that we use every day, and it provides an easy way to jump to code changes when reviewing tickets. Wiki migration would be nice to have, but if it's not possible, we can continue to run the old system whilst we write some kind of a one-off script to migrate the content.
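
    For the repository side, a hedged sketch of the usual history-preserving conversion (assuming a standard trunk/branches/tags layout and a hand-built authors file; git-svn records the original SVN revision in each commit message, which is what lets ticket-to-changeset links keep resolving):

        # authors.txt maps each SVN username to a "Name <email>" git identity
        git svn clone http://svn.example.com/repo \
            --stdlayout --authors-file=authors.txt repo-git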

    Read the article

  • people_dl_import shows millions of records

    - by amit lohogaonkar
    We now have a situation in prod on a SharePoint 2007-based intranet platform: it shows thousands of records under the people_dl_import category with the format spsimport://?$$dl$$/domain1/domain2/domain3/. The import was not stopping; it added millions of records to the database, and the disk was on the verge of being full. On other servers, like dev, we have very little data in this category, and the format is like spsimport://domainname?$$dl$$?..., which is good, with only 6,000 rows; prod has 2 million rows crawled under the people_dl_import category. I need to know the cause of this garbage data and how to fix it. I tried resetting the content source, and I will do a full import this weekend to see if the garbage data gets cleared. Any idea what is causing this issue?

    Read the article

  • How to factorize code in Unreal Kismet (i.e. "Material Function"s for Kismet)

    - by Georges Dupéron
    In the Unreal Development Kit, when using the Material Editor, one can factor out frequently-used groups of nodes by creating a Material Function (Content Browser -> right-click -> New Material Function, IIRC). When defining the behaviour of some actor in Kismet, one can easily have a dozen nodes involved. If I have many actors that share the same behaviour, then I'll copy-paste these nodes and change the variables so they point to the other actors. This leads to inconsistencies (a modification in the behaviour of one actor isn't propagated to the copy-pasted nodes), complexity (you end up with hundreds of nodes), and generally useless effort. My question is: can I create a "Kismet function", just like a Material Function?
    Note: I'd rather avoid using UnrealScript. I don't even know where to type UnrealScript, don't know where the documentation is, and more generally don't have enough time to invest in learning it. This "Kismet function" feature must be usable by artists (with little programming knowledge). If a (simple) script suffices to add this feature to the Kismet editor, so that one can create several "functions" without using UnrealScript, then fine, but I don't really want to have to write a script each time I want to factor out a few nodes.
    Thanks for any information!

    Read the article
