Search Results

Search found 61297 results on 2452 pages for 'open files'.


  • Ubuntu 11.10 boot: xhost: unable to open display

    - by paulus_almighty
    I've had this papercut for a while now; it's time it was fixed. When I boot up Ubuntu, choosing "Ubuntu...generic" from the grub screen, Ubuntu fails to load. It just sits at the driver/module loading screen. The most significant line in this output seems to be "xhost: unable to open display". If I choose "Ubuntu...(recovery mode)" from grub then it loads OK. I don't get why this is. Out of interest I tried enabling boot error logging by setting BOOTLOGD_ENABLE=Yes in /etc/default/bootlogd, but I'm not seeing anything in that file. ETA: I've had this problem since a fresh install of 11.10. Here's lshw:

        $ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: GF104 [GeForce GTX 460]
               vendor: nVidia Corporation
               physical id: 0
               bus info: pci@0000:03:00.0
               version: a1
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
               configuration: driver=nvidia latency=0
               resources: irq:16 memory:f6000000-f7ffffff memory:e0000000-e7ffffff memory:ec000000-efffffff ioport:bf00(size=128) memory:e8000000-e807ffff

    Read the article

  • Access to my files on Android

    - by user18644
    I am thinking of subscribing to Dropbox, which is slightly more costly than Ubuntu One, but I need access to my files on the go, and I prefer to use my smartphone to my netbook most of the time as I like to travel light. I do not want to stream music; I want access to my files only. Whereas there is a free app for accessing files on Dropbox, there isn't one for Ubuntu One. I would be prepared to wait a while if you have this in hand; have you actually given it any thought? Should I ignore Ubuntu One and link up with Dropbox?

    Read the article

  • Lock ups, crashing, transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems that it hasn't been discussed yet - or at least I can't find any information relating to it. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup: a QNAP 410 as iSCSI target with multiple LUNs (CRC is turned on: Data Digest and Header Digest), and Server 2003 with iSCSI Initiator version 2.08 - build 3825. I'm copying files from another machine to shares on Server 2003, into the TrueCrypt volume and thus onto the NAS. I have mounted the LUN and formatted it with TrueCrypt using NTFS (a full format, not a quick one). What happens is that some files, mainly RAR/compressed files, appear to copy but fail. I've tested this in a number of ways and can repeat the process every time. So I thought to check transfer over iSCSI without TrueCrypt in between, with a plain NTFS format - no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet; I will try that. I also haven't tried it without CRC, but I fail to see how that would affect this. I will update with my findings later. In the meantime, does anyone have any ideas as to what could be wrong? Thanks for your time. Update: I copied a set of files, the ones I was having issues with, to the server, then from there I copied them into two places within the TrueCrypt volume (mounted on the NAS): a separate directory created in the root of the volume, and the same initial directory I was using in the first instance. Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows shares. I say this because I originally set up the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements - a day wasted as well. While I had that setup, though, I copied my entire file set to the volume files and all files copied without error - over the network, from a PC, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system as I highly doubt that is the cause in light of this finding. So, any ideas?
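
    For reproducing this kind of corruption systematically, comparing checksums on both sides of the copy beats eyeballing whether a transfer "appears" to succeed. A throwaway sketch (the two paths are placeholders, not the asker's real ones) that hashes every file in a source tree against its copy on the TrueCrypt-backed share:

        import hashlib
        from pathlib import Path

        def sha256(path, chunk=1 << 20):
            """Stream a file through SHA-256 in 1 MiB chunks."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        src = Path(r"D:\outbound")          # placeholder: files being copied
        dst = Path(r"\\server\share\dest")  # placeholder: TrueCrypt-backed share

        for f in src.rglob("*"):
            if f.is_file():
                twin = dst / f.relative_to(src)
                status = "MISSING" if not twin.exists() else (
                    "OK" if sha256(f) == sha256(twin) else "CORRUPT")
                print(status, f.relative_to(src))

    Running this after each test copy makes it easy to tell whether failures are deterministic for particular files or scattered randomly, which helps pin the blame on one layer of the stack.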

    Read the article

  • Deletion of SQL Profiler Trace files (.trc)

    - by Mark
    We've noticed a lot of .trc files in our SQL data folder (\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data) on our server. The date range for these files spans over one day and the total file size of all files together is about 21 gigs. I'd like to free up this space but I'm not sure if I can just delete the files manually through Windows Explorer or if I need to do anything in SQL, like run a command or script. Any ideas?
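
    For what it's worth, trace files are plain output files: once no trace is actively writing to them, they can be deleted from the OS side. A cautious way to script that (a sketch only; the instance folder and the active-trace path below are assumptions, and SELECT path FROM sys.traces is how you would list traces that are still running):

        from pathlib import Path

        # Folder from the question; adjust MSSQL.1 for your instance.
        data_dir = Path(r"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data")

        # Paths of traces that are still running, taken from:
        #   SELECT path FROM sys.traces;
        # (run that in SSMS first; the entry below is a hypothetical example)
        active = {r"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\log_501.trc"}

        freed = 0
        for trc in data_dir.glob("*.trc"):
            if str(trc) in active:
                continue  # skip files an active trace still has open
            freed += trc.stat().st_size
            trc.unlink()
        print(f"Freed {freed / 2**30:.1f} GiB")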

    Read the article

  • How to merge many text files' data into a database

    - by Mirage
    I have around 100 text files. The files contain a question and 3 choices, like below:
    ab001.txt -- contains the question
    ab001a.txt -- is the first choice
    ab001b.txt -- is the second choice
    ab001c.txt -- is the third choice
    There are thousands of files like this. Now I want to insert them into SQL, or first maybe into Excel: the first column the question and the other three columns the answers. The first two characters are the same for some files; it looks like they signify some category, so around every 30 questions share the same first characters. Any ideas?
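
    Since the grouping is entirely in the file names, a short script can stitch each question to its three choices and emit one row per question, ready for a bulk load into SQL or a paste into Excel. A hedged sketch (the directory name and CSV target are assumptions, and the category is taken from the first two characters as guessed above):

        import csv
        from pathlib import Path

        SRC = Path("questions")  # folder holding ab001.txt, ab001a.txt, ...

        with open("questions.csv", "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(["category", "id", "question", "a", "b", "c"])
            # Question files have stems ending in a digit (ab001); choice files
            # append a/b/c to that stem (ab001a, ab001b, ab001c).
            for qfile in sorted(SRC.glob("*.txt")):
                stem = qfile.stem
                if not stem[-1].isdigit():
                    continue  # choice file; picked up via its question below
                row = [stem[:2], stem, qfile.read_text(encoding="utf-8").strip()]
                for suffix in "abc":
                    choice = SRC / f"{stem}{suffix}.txt"
                    row.append(choice.read_text(encoding="utf-8").strip()
                               if choice.exists() else "")
                writer.writerow(row)

    The CSV can then be pulled into a staging table with whatever bulk-import facility the target offers (BULK INSERT, LOAD DATA INFILE, or Excel's import wizard).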

    Read the article

  • Moving directories full of files over the top

    - by JavaRocky
    I took a backup of a directory which has a number of directories and files inside it. Recently some files have gone missing. I would like to move just the missing files back over. I prefer moving files instead of copying, as space is at a premium on this particular box and the files are quite large. How can I achieve this?
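
    One way to script this (a sketch under assumed paths; test on a copy first): walk the backup tree, and for each file that no longer exists in the live tree, move it back.

        import shutil
        from pathlib import Path

        backup = Path("/mnt/backup/data")  # assumed location of the backup copy
        live = Path("/srv/data")           # assumed live directory

        for src in backup.rglob("*"):
            if not src.is_file():
                continue
            dest = live / src.relative_to(backup)
            if dest.exists():
                continue  # file is still there; leave the backup copy alone
            dest.parent.mkdir(parents=True, exist_ok=True)
            # On the same filesystem this is a rename; across filesystems it
            # copies then deletes, so it only ever needs space for one file.
            shutil.move(str(src), str(dest))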

    Read the article

  • Multiple [stand-alone] zip file creation?

    - by im_chc
    How can I automatically zip a group of files into multiple zip files (say, 2 MB in size for each file), such that each zip file is a stand-alone zip file? (i.e. not multi-volume zip files, where losing any one of the volumes means you can't unzip the rest.) Are there any tools available to do so? Actually I just need to group the files into many groups, 2 MB each etc.; zipped or not zipped doesn't matter. Thx!
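
    A first-fit grouping is enough here. A rough sketch (the 2 MB budget is the pre-compression total per group, and the source folder name is an assumption) where each group becomes an ordinary, independently extractable archive:

        import zipfile
        from pathlib import Path

        LIMIT = 2 * 1024 * 1024  # ~2 MB of input per archive (pre-compression)
        files = sorted(p for p in Path("to_zip").iterdir() if p.is_file())

        # First-fit grouping: start a new group whenever the budget is exceeded.
        groups, current, size = [], [], 0
        for p in files:
            fsize = p.stat().st_size
            if current and size + fsize > LIMIT:
                groups.append(current)
                current, size = [], 0
            current.append(p)
            size += fsize
        if current:
            groups.append(current)

        # Each group becomes an ordinary, stand-alone zip.
        for n, group in enumerate(groups, 1):
            with zipfile.ZipFile(f"part{n:03d}.zip", "w", zipfile.ZIP_DEFLATED) as zf:
                for p in group:
                    zf.write(p, arcname=p.name)

    A file larger than the budget simply ends up alone in an oversized group, and since zipping is optional for the asker, the grouping loop alone covers the "not zipped" case.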

    Read the article

  • How to delete duplicate restored user files with "(2)" added (Win7)

    - by user332172
    I restored my user files on a Windows 7 system from the Win 7 backup. I selected the wrong restore option and all files were restored; existing unchanged files were restored again with the text string " (2)" appended to the name. Is there a way to write a batch file or script to clean this up? Example file names: "01 lesson 1" and "01 lesson 1 (2)". I want to delete all files which had " (2)" appended on restore.
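
    A script for this should only delete a " (2)" copy when the original is still present, and ideally only when the two files have identical content. A hedged sketch along those lines (the root folder is an assumption; run with DRY_RUN = True first):

        import filecmp
        import re
        from pathlib import Path

        ROOT = Path(r"C:\Users\me\Documents")  # assumed restore location
        DRY_RUN = True                          # flip to False to really delete
        # "01 lesson 1 (2)" or "report (2).docx" -> original name without " (2)"
        pattern = re.compile(r"^(?P<base>.*) \(2\)(?P<ext>\.[^.]*)?$")

        for dup in ROOT.rglob("* (2)*"):
            if not dup.is_file():
                continue
            m = pattern.match(dup.name)
            if not m:
                continue
            original = dup.with_name(m["base"] + (m["ext"] or ""))
            # Only remove the copy if the original exists and contents match.
            if original.is_file() and filecmp.cmp(dup, original, shallow=False):
                print("delete", dup)
                if not DRY_RUN:
                    dup.unlink()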

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World. In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading. OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */
          INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */  /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage, using Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP. It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user. The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15 TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files. Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100 GB in size, the workload size for that location file is 500 GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.

    Determining the Number of HDFS Files. Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900 GB and the others are 100 GB. When the tool tries to balance the load it will be forced to put the singleton 900 GB file into one location file, and put each of the 100 GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that are approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200 GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8 GB. I then did the same load with 12800 files, where each HDFS file was about 80 MB in size. The end-to-end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8 MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps. So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
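
    The greedy balancing described in the parenthetical above is easy to sanity-check offline. Below is a toy sketch of that idea (hypothetical file sizes; an illustration of the algorithm as described, not the actual OSCH External Table tool):

        import heapq

        def balance(hdfs_file_sizes_gb, num_location_files):
            """Assign each file, biggest first, to the lightest location file."""
            # Heap of (current workload, location file index, assigned sizes).
            heap = [(0, i, []) for i in range(num_location_files)]
            for size in sorted(hdfs_file_sizes_gb, reverse=True):
                load, i, files = heapq.heappop(heap)  # lightest location file
                files.append(size)
                heapq.heappush(heap, (load + size, i, files))
            return sorted(heap)

        # The skewed example from the post: one 900 GB file, seven 100 GB files.
        for load, i, files in balance([900] + [100] * 7, 8):
            print(f"location file {i}: {load} GB {files}")

    Feeding it the skewed example reproduces the 9-to-1 skew; feeding the same 1600 GB in as 160 ten-GB files instead yields eight location files of 200 GB each, matching the arithmetic above.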

    Read the article

  • Can I use an open source rich text editor in my site? What does the GNU license have to do with that?

    - by Shyju
    I want to use a rich text editor in one of my ASP pages. I searched the internet and found that there are a lot of open source options available, like TinyMCE, FCKeditor, NicEdit, etc. Can I put the same samples in my website? There is a GNU license associated with them. Can somebody interpret it for me by answering these questions:
    1. Can I use it in my website without getting permission from anyone?
    2. Do I need to maintain the same files always? Can I make some customizations and use it? The customization is only going to be CSS changes.
    3. Do I need to include all the files exactly as they are in the downloaded package? My download has samples supporting all languages - do I need to put all those folders in too?
    From the GNU license: "if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights."

    Read the article

  • Personal Technology – Excel Tip: Comparing Excel Files

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice. I have been writing Excel tips on my blog and thought it would be great to share one interesting tip here as a guest post. Assume a situation where you want to compare multiple Excel files. Here is a typical scenario I have encountered as a common activity. Assume you are sending an Excel file with tons of data, formulae and multiple sheets. Now you are requesting your colleague to validate the file and, if required, change content for correctness. After receiving the file from your colleague, you want to know what changes were made by this person to your document. Here is a cool new addition to Excel 2013 that can help you achieve this task. To get to this option, click the INQUIRE tab. In case you don't have the INQUIRE tab, check the Options using INQUIRE blog post, where we discuss the other options of the INQUIRE tab. Once you are on the INQUIRE tab, select the "Compare Files" button as shown in the figure above. This brings up a dialog as below. If you are on Windows 8 or Windows 7, you can instead search for an application called "Spreadsheet Compare 2013"; ultimately both options lead to the same application. If you are using the stand-alone app, once the app initializes, click the "Compare files" option from the toolbar. Make sure to give two different Excel files, as shown in the figure above. After selecting the Excel sheets, you can see the Compare tool has a number of other options to play with. We will talk about some of them later in this post. Just below our toolbar is a colorful side-by-side comparison of both our Excel sheets. We can also see the various tabs from each file. There is a meaning to each of the color codings, which will be discussed next. As you saw above, the color coding has a meaning. For example, the bottom pane lists each of the color codings and, most importantly, each of the changes compared side by side. The detailed information shown below can be exported using the "Export Results" option from the toolbar as a separate Excel workbook, or can be copied to the clipboard to be used later. The final piece of the puzzle is a graphical view of these difference results based on each category. We cannot drill down per se, but this is a great way to know that the maximum changes seem to be based on "Cell Formats", with a few "Calculated Values" changed as well. The INQUIRE option and the Spreadsheet Compare 2013 tool are part of Excel 2013. So as you explore the new version of Excel, there are many such hidden features that are worth exploring. Do let us know if you enjoyed learning a new feature today, and I hope you will play around with this feature in your day-to-day challenges when working with Excel files. Reference: Pinal Dave (http://blog.sqlauthority.com)
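
    If you are stuck on an older Excel without INQUIRE, the basic idea (walk both workbooks and report differing cells) is easy to approximate in script form. A rough sketch using the third-party openpyxl library (file names are placeholders; this compares stored values only, not formats or formulas, so it is far cruder than Spreadsheet Compare):

        from openpyxl import load_workbook

        a = load_workbook("original.xlsx", data_only=True)
        b = load_workbook("reviewed.xlsx", data_only=True)

        for name in a.sheetnames:
            if name not in b.sheetnames:
                print(f"sheet {name!r} missing from reviewed copy")
                continue
            ws_a, ws_b = a[name], b[name]
            rows = max(ws_a.max_row, ws_b.max_row)
            cols = max(ws_a.max_column, ws_b.max_column)
            for r in range(1, rows + 1):
                for c in range(1, cols + 1):
                    va = ws_a.cell(r, c).value
                    vb = ws_b.cell(r, c).value
                    if va != vb:
                        coord = ws_a.cell(r, c).coordinate
                        print(f"{name}!{coord}: {va!r} -> {vb!r}")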

    Read the article

  • "type" Command Not Working As Expected on Git Bash

    - by trysis
    The type command, in Linux, reports where on the filesystem a given command lives, if it is in the $PATH. This functionality is also available on Windows through the Git Bash command-line program. The command also returns a file's location given the file name without its extension (.exe, .vbs, etc.). However, I have run into what seems like a strange corner case where the file exists on the $PATH but doesn't get returned by the command. I am thinking of buying a new computer soon, so I looked up the method of transferring the license key from one computer to another, in preparation for actually doing this. The method I found mentioned the files slmgr.vbs and slui.exe, both of which reside in the C:\Windows\System32 folder, which is in my $PATH, as usual for a Windows computer. However, these two files aren't showing up when I use the type command. Also, neither gets executed when I call the files as commands without their extensions in Git Bash, and only slmgr.vbs gets executed when I call them with the extensions. Finally, slmgr.vbs is shown when listing the folder's contents in Git Bash as well, but slui.exe isn't. I thought this might have to do with permissions, and, indeed, both files have very restrictive permissions, as you can see in the pictures below, but they both have the same permissions, which wouldn't explain why one gets executed and the other doesn't when called directly, nor why one file is listed on the command line but the other isn't.

    C:\Windows\System32 folder, proving the files exist:
    File permissions for the Users and Administrators groups for the two files (they are identical):
    And the folder:

    The type command and its output in Git Bash for the 2 files, plus listing the files in the folder (using grep to filter, as the folder is huge), as well as listing part of the $PATH (keep in mind, for all of these, that Git Bash changes the paths as they are displayed):

    Sean@MYPC ~
    $ type -a slmgr
    sh.exe": type: slmgr: not found
    Sean@MYPC ~
    $ type -a slmgr.vbs
    sh.exe": type: slmgr.vbs: not found
    Sean@MYPC ~
    $ type -a slui
    sh.exe": type: slui: not found
    Sean@MYPC ~
    $ type -a slui.exe
    sh.exe": type: slui.exe: not found
    Sean@MYPC ~
    $ slmgr
    sh.exe": slmgr: command not found
    Sean@MYPC ~
    $ slmgr.vbs
    /c/WINDOWS/system32/slmgr.vbs: line 2: syntax error near unexpected token `('
    /c/WINDOWS/system32/slmgr.vbs: line 2: `' Copyright (c) Microsoft Corporation. All rights reserved.'
    Sean@MYPC ~
    $ slui
    sh.exe": slui: command not found
    Sean@MYPC ~
    $ slui.exe
    sh.exe": slui.exe: command not found
    Sean@MYPC ~
    $ ls /c/Windows/System32/slui.exe /c/Windows/System32/slmgr.vbs
    ls: /c/Windows/System32/slui.exe: No such file or directory
    /c/Windows/System32/slmgr.vbs
    Sean@MYPC ~
    $ echo $PATH
    /c/Users/Sean/bin:.:/usr/local/bin:/mingw/bin:/bin:/cmd:/c/Python33/:/c/Program Files (x86)/Intel/iCLS Client/:/c/Program Files/Intel/iCLS Client/:/c/WINDOWS/system32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/WINDOWS/System32/WindowsPowerShell/v1.0/:/c/Program Files/Intel/Intel(R) Management Engine Components/DAL:/c/Program Files/Intel/Intel(R) Management Engine Components/IPT:/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/DAL:/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/IPT:/c/Program Files/Intel/WiFi/bin/:/c/Program Files/Common Files/Intel/WirelessCommon/:/c/strawberry/c/bin:/c/strawberry/perl/site/bin:/c/strawberry/perl/bin:/c/Program Files (x86)/Microsoft ASP.NET/ASP.NET Web Pages/v1.0/:/c/Program Files/Microsoft SQL Server/110/Tools/Binn/:/c/Program Files (x86)/Microsoft SQL Server/90/Tools/binn/:/c/Program Files (x86)/OpenAFS/Common:/c/HashiCorp/Vagrant/bin:/c/Program Files (x86)/Windows Kits/8.1/Windows Performance Toolkit/:/c/Program Files/nodejs/:/c/Program Files (x86)/Git/cmd:/c/Program Files (x86)/Git/bin:/c/Program Files/Microsoft/Web Platform Installer/:/c/Ruby200-x64/bin:/c/Users/Sean/AppData/Local/Box/Box Edit/:/c/Program Files (x86)/SSH Communications Security/SSH Secure Shell:/c/Users/Sean/Documents/Lisp:/c/Program Files/GCL-2.6.1/lib/gcl-2.6.1/unixport:/c/Chocolatey/bin:/c/Users/Sean/AppData/Roaming/npm:/c/wamp/bin/mysql/mysql5.6.12/bin:/c/Program Files/Oracle/VirtualBox:/c/Program Files/Java/jdk1.7.0_51/bin:/c/Program Files/Node-Growl:/c/chocolatey/bin:/c/Program Files/eclipse:/c/MongoDB/bin:/c/Program Files/7-Zip:/c/Program Files (x86)/Google/Chrome/Application:/c/Program Files (x86)/LibreOffice 4/program:/c/Program Files (x86)/OpenOffice 4/program

    What's happening? Why aren't these files listed by the type command? Is this because of weird Windows permissions, or something even weirder? If permissions, why do they seem to have the same permissions, yet both are not handled in the same way?

    Read the article

  • apache2: Could not open configuration file /etc/apache2/apache2.conf: Permission denied

    - by AntonChanning
    I recently upgraded Ubuntu to the latest LTS edition on my work laptop, which I use as a LAMP development platform. The upgrade was from 12.04 to 14.04. Now I'm having trouble getting Apache up and running again. Here is the output from an attempt:

    antonc@antonc-laptop:/etc/apache2$ sudo service apache2 restart
     * Restarting web server apache2
     * The apache2 configtest failed.
    Output of config test was:
    apache2: Could not open configuration file /etc/apache2/apache2.conf: Permission denied
    Action 'configtest' failed.
    The Apache error log may have more information.

    Here is a list of permissions and ownership in /etc/apache2, showing that apache2.conf is currently owned by root with permissions 644. I changed this temporarily to 777, but that made no difference, so I changed it back to 644.

    antonc@antonc-laptop:/etc/apache2$ ls -l
    total 80
    -rw-r--r-- 1 root root 7115 Jan  7  2014 apache2.conf
    ...

    What do I need to do to get Apache running again? Is the problem really with apache2.conf or some other setting? Should the conf file be owned by a user other than root?
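
    Since the file's own mode looks fine, a usual next suspect is the directory chain: every directory from / down to /etc/apache2 needs the execute (search) bit for the user reading the file. A small diagnostic sketch (it only prints; it changes nothing) to inspect the modes along the path:

        import os
        import stat
        from pathlib import Path

        p = Path("/etc/apache2/apache2.conf")
        # Walk from the root down to the file, printing mode, uid and gid.
        for part in [*reversed(p.parents), p]:
            st = os.stat(part)
            print(stat.filemode(st.st_mode), st.st_uid, st.st_gid, part)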

    Read the article

  • Open Source WPF UML Design tool

    - by oazabir
    PlantUmlEditor is my new free open source UML designer project, built using WPF and .NET 3.5. If you have used PlantUML before, you know that you can quickly create sophisticated UML diagrams without struggling with a designer. Especially those who use Visio to draw UML diagrams (God forbid!) will be in heaven. This is a super fast way to get your diagrams up and ready for show. You can *write* UML diagrams in plain English, following a simple syntax, and get diagrams generated on the fly. This editor really saves time designing UML diagrams. I have to produce quick diagrams to convey ideas quickly to architects, designers and developers every day. So I use this tool to write some quick diagrams at the speed of coding, and the diagrams get generated on the fly. Instead of writing a long mail explaining some complex operation or business process in English, I can quickly write it in the editor in almost plain English and get a nice-looking activity/sequence diagram generated instantly. Making major changes is also as easy as doing search-replace and copy-pasting blocks here and there. You don't get such agility in any conventional mouse-based UML designer. I have submitted a full CodeProject article to give you a detailed walkthrough of how I built this. Please read the article and vote for me if you like it. PlantUML Editor: A fast and simple UML editor using WPF http://www.codeproject.com/KB/smart/plantumleditor.aspx You can download the project from here: http://code.google.com/p/plantumleditor/

    Read the article

  • OAF Page to Upload Files into Server from local Machine

    - by PRajkumar
    1. Create a New Workspace and Project
       File > New > General > Workspace Configured for Oracle Applications
       File Name - PrajkumarFileUploadDemo
       A new OA Project will also be created automatically:
       Project Name - FileUploadDemo
       Default Package - prajkumar.oracle.apps.fnd.fileuploaddemo

    2. Create a New Application Module (AM)
       Right click on FileUploadDemo > New > ADF Business Components > Application Module
       Name - FileUploadAM
       Package - prajkumar.oracle.apps.fnd.fileuploaddemo.server
       Check "Application Module Class: FileUploadAMImpl - Generate Java File(s)"

    3. Create a New Page
       Right click on FileUploadDemo > New > Web Tier > OA Components > Page
       Name - FileUploadPG
       Package - prajkumar.oracle.apps.fnd.fileuploaddemo.webui

    4. Select FileUploadPG and go to the structure pane, where a default region has been created.

    5. Select region1 and set the following properties:
       ID - PageLayoutRN
       AM Definition - prajkumar.oracle.apps.fnd.fileuploaddemo.server.FileUploadAM
       Window Title - Uploading File into Server from Local Machine Demo
       Title - Uploading File into Server from Local Machine Demo

    6. Create a Stack Layout Region under the Page Layout Region
       Right click PageLayoutRN > New > Region
       ID - MainRN
       Region Style - messageComponentLayout

    7. Create a New messageFileUpload Item under MainRN
       Right click on MainRN > New > messageFileUpload
       ID - MessageFileUpload
       Item Style - messageFileUpload

    8. Create a New Submit Button under MainRN
       Right click on MainRN > New > messageLayout
       ID - ButtonLayout
       Right click on ButtonLayout > New > Item
       ID - Submit
       Item Style - submitButton
       Attribute Set - /oracle/apps/fnd/attributesets/Buttons/Go

    9. Create a Controller for page FileUploadPG
       Right click on PageLayoutRN > Set New Controller
       Package Name: prajkumar.oracle.apps.fnd.fileuploaddemo.webui
       Class Name: FileUploadCO

       Write the following code in FileUploadCO:

           import java.io.File;
           import java.io.FileOutputStream;
           import java.io.InputStream;
           import oracle.apps.fnd.framework.OAException;
           import oracle.cabo.ui.data.DataObject;
           import oracle.jbo.domain.BlobDomain;

           public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
           {
             super.processFormRequest(pageContext, webBean);
             if (pageContext.getParameter("Submit") != null)
             {
               upLoadFile(pageContext, webBean);
             }
           }

       Use the following code to upload files into a directory on the server. (The original post also shows a local-machine variant, which is identical except that the two path strings are "D:\\PRajkumar" instead of the server paths below.)

           public void upLoadFile(OAPageContext pageContext, OAWebBean webBean)
           {
             String filePath = "/u01/app/apnac03r12/PRajkumar/";
             System.out.println("Default File Path---->" + filePath);
             try
             {
               // MessageFileUpload is the id of the messageFileUpload bean
               DataObject fileUploadData = pageContext.getNamedDataObject("MessageFileUpload");
               if (fileUploadData != null)
               {
                 String uFileName = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");
                 // MIME type, should you need it
                 String contentType = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");
                 System.out.println("User File Name---->" + uFileName);
                 // The uploaded bytes are exposed as a BlobDomain keyed by the file name
                 BlobDomain uploadedByteStream = (BlobDomain) fileUploadData.selectValue(null, uFileName);
                 File file = new File("/u01/app/apnac03r12/PRajkumar", uFileName);
                 FileOutputStream output = new FileOutputStream(file);
                 InputStream input = uploadedByteStream.getInputStream();
                 byte abyte0[] = new byte[0x19000];
                 int i;
                 while ((i = input.read(abyte0)) > 0)
                 {
                   output.write(abyte0, 0, i);
                 }
                 output.close();
                 input.close();
               }
             }
             catch (Exception ex)
             {
               throw new OAException(ex.getMessage(), OAException.ERROR);
             }
           }

    10. Congratulations, you have successfully finished. Run your page and test your work. (The screenshots in the original post show the page before and after uploading files to the server and to the local machine.)

    Read the article

  • Workshop in Holland - and open questions

    - by Mike Dietrich
    Thanks to everybody who visited the Upgrade Workshop in Maarssen yesterday. I had lots of fun - and I hope you enjoyed it, too :-) The slides, as always, can be downloaded from: http://apex.oracle.com/folien - use the Schluesselwort/Keyword: upgrade112. And thanks to all of you who sent feedback regarding "traget/destination" (I will change it in the slides) and other topics such as Enterprise Manager Grid Control 11g. Enterprise Manager 11g will be launched on 22-APR-2010 - and you can join the event live if you happen to be in New York: http://www.oracle.com/enterprisemanager11g/index.html - thanks for this hint!!! Regarding the open questions: Will there be PSUs available for Intel Solaris? PSUs will be made available on nearly all platforms, including Intel Solaris. Please see Note:882604.1 for platform information and Note:854428.1 for direct links to the PSU download location. Is COMMIT_WRITE=NOWAIT the default in patch set 10.2.0.4? I tried to verify this and could find neither a bug entry nor documentation saying that 10.2.0.4 has a different default setting (the default behaviour is WAIT). I checked it in my 10.2.0.4 instances as well, and there it is set to WAIT. If this parameter is not explicitly specified, then database commit behavior defaults to writing commit records to disk before control is returned to the client. If only IMMEDIATE or BATCH is specified, but not WAIT or NOWAIT, then WAIT mode is assumed. If only WAIT or NOWAIT is specified, but not IMMEDIATE or BATCH, then IMMEDIATE mode is assumed. Please send me feedback if you have different experiences. Service Request escalation by telephone? Thanks for this update - I didn't realize that ;-) Now I know why it didn't help last month when I updated an SR ... here's the official information on that: Note:199389.1 (updated on 24-FEB-2010). See the telephone number for Oracle Support to request an escalation here: http://www.oracle.com/support/contact.html

    Read the article

  • Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations

    - by JuergenKress
    The list below only includes SOA presentations delivered or moderated by Oracle SOA Product Management. For a complete list of Oracle Open World 2012 presentations, please go here.
    - Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge
    - Using the Right Tools, Techniques, and Technologies for Integration Projects
    - Administration and Management Essentials for Oracle SOA Suite 11g
    - Extreme Performance and Scale Delivered by SOA on Oracle Exalogic
    - Successful Application Integration and SOA Projects: Customer Panel
    - How to Integrate Cloud Applications with Oracle SOA Suite
    - Transforming the Utilities Industry with Oracle Fusion Middleware
    - Cloud and On-Premises Applications Integration, Using Oracle Integration Adapters
    - Delivering High Value B2B Gateways with Oracle SOA Suite 11g
    - Implementing Successful Healthcare Applications with Oracle SOA Suite
    - Migrating to Oracle SOA Suite: A Sun Java CAPS Customer Experience
    - If Mobile Enablement Is on Your Mind, Oracle SOA Suite and Oracle Service Bus Can Help
    - Building Shared Services Infrastructure with Oracle Service Bus: Customer Panel
    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • The Open Data Protocol

    - by Bobby Diaz
    Well, day 2 of the MIX10 conference did not disappoint. The keynote speakers introduced the preview release of IE9, which looks really cool and quick, and Visual Studio 2010 RC, which is scheduled to RTM on April 12th and seemed to have a lot of improvements aimed at making developers more productive. Here are the current links to these two offerings:
    Internet Explorer 9 – Platform Preview
    Visual Studio 2010 and .NET 4 – Release Candidate
    While both of these were interesting, the demos that really blew me away today centered around the work being done with The Open Data Protocol, or OData for short! OData is a recommended standard being pushed by Microsoft that uses a REST-based interface to interact with various types of data in a uniform manner. Data producers then provide the data to consumers in either ATOM or JSON format, as requested by the client application. The OData SDK contains client and server libraries for many of the popular languages in use today, including .NET, Java, PHP, Objective-C and JavaScript, so you can consume or even produce your own OData services. More information can be found using the following links:
    OData.org
    How to navigate an OData compliant service
    Query Functions (WCF Data Services)
    Netflix has made available one of the first live OData services by exposing their entire movie catalog. You can browse and query using URLs similar to the following:
    http://odata.netflix.com/
    http://odata.netflix.com/Catalog/Genres('Horror')/CatalogTitles
    http://odata.netflix.com/Catalog/CatalogTitles?$filter=startswith(Title/Regular,%20'Star%20Wars')&$orderby=Title/Regular
    So now I just need to find a reason (excuse?) to start using OData in a real project! Enjoy!
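
    Since OData is just HTTP plus a query convention, trying it from a script is a one-liner away. A hedged sketch (the Netflix catalog endpoint shown above was retired years ago, so treat the URL as illustrative; $format=json is the OData v2 way to ask for JSON instead of ATOM, and v2 services conventionally wrap collections in a "d"/"results" envelope):

        import json
        import urllib.request

        # Illustrative only: the public Netflix OData catalog no longer exists.
        url = ("http://odata.netflix.com/Catalog/CatalogTitles"
               "?$filter=startswith(Title/Regular,%20'Star%20Wars')"
               "&$orderby=Title/Regular&$format=json")

        with urllib.request.urlopen(url) as resp:
            feed = json.load(resp)

        # OData v2 JSON wraps collection results in a "d" envelope.
        for title in feed["d"]["results"]:
            print(title["Title"]["Regular"])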

    Read the article

  • HYUNDAI @ Oracle Open World 2012 General Session (GEN9449): Engineered Systems - From Vision to Game-Changing Results

    - by Sanjeev Sharma
     Why do data centers still demand an “assembly required” approach? This necessity proves costly and complex, forces customers to deal with a wide range of vendors for each application, and fails to deliver performance optimization for application and data workloads. Oracle believes that systems (just like automobiles) should be designed and engineered “at the factory”, with the goal of reducing customers’ costs and complexity and delivering extreme performance, reliability, availability, and simplicity with a higher degree of automation. Hyundai Motor Company was founded in 1967 and has since become a global brand in the automotive industry. Hyundai Motor Company was looking for a solution to manage its intellectual capital by capturing and facilitating the re-use of the knowledge of its thousands of employees. To achieve this, Hyundai Motor Company set out to build a centralized document management platform that would allow its 30,000 knowledge workers to collaborate by sharing documents in a secure manner, anytime, anywhere. Furthermore, this new knowledge management platform would bring about significant improvements in employee productivity. Hear senior business leaders from Hyundai speak about the role and benefits of running their knowledge management platform on the Oracle family of engineered systems at the following general session at Oracle Open World 2012:
     Session: GEN9499 - General Session: Engineered Systems—From Vision to Game-Changing Results
     Date: Monday, 1 Oct, 2012
     Time: 1:45 pm - 2:45 pm (PST)
     Venue: Moscone West (2002/2004)

    Read the article

  • Oracle Open World 2013 - JD Edwards at Your Fingertips

    - by KemButller
    The Oracle & JD Edwards Universe at Your Fingertips! Oracle Open World features thousands of sessions from which attendees can choose, including keynotes, technical sessions, demos, and hands-on labs. Hundreds of exhibitors will be on hand to share what they’re bringing to the leading edge of Oracle technology. You will have an infinite number of opportunities to network, trade information with peers, and gain insights from experts. For JD Edwards’ customers this valuable experience is twofold. Enjoy the convenience of attending the core JD Edwards program featured at the Intercontinental Hotel, and experience the keynotes, educational sessions, networking events and partner solutions exhibited at the adjacent Moscone Convention Center. Highlights for JD Edwards customers:
    - Kick off with the JD Edwards General Session, followed by product strategy road map sessions.
    - Select from over 60 educational sessions specifically applicable to JD Edwards.
    - Deepen your knowledge by attending the JD Edwards EnterpriseOne technical hands-on lab sessions, including: One View Reporting (basic and advanced), EnterpriseOne Page Generator, User Interface Personalization, and Configuring Composite Applications with Café One.
    - Choose from thousands of educational sessions offered throughout the entire conference, covering Oracle applications, industries, middleware, server and storage systems, and database.
    - Meet the JD Edwards experts in the Oracle DEMOgrounds and get hands-on experience with the latest and hottest features in Applications, Tools and Technologies, Mobility, In-Memory Applications, Health and Safety Incident Management, User Experience and Reporting.
    - Visit the JD Edwards Partner Pavilion at the Intercontinental Hotel, featuring partner organizations with solutions for JD Edwards customers.
    - Meet with the Oracle JD Edwards Upgrade team during the conference as part of the Upgrade Care Program. Maximize your conference experience and leave with the information and contacts you need to turbo-charge your upgrade planning. Contact Barbara.canham-AT-oracle-DOT-com prior to the conference for more information.
    - Arrive on Sunday to participate in sessions presented by the Special Interest Groups of Quest International User Group.

    Read the article

  • MySQL Connect Call for Papers Open Now, until May 6

    - by Bertrand Matthelié
    MySQL Connect will take place in San Francisco September 29 and 30; you can read the Press Release here. The call for papers is open until May 6; submit your sessions now! This is your chance to present your real-world experience and share your expertise and best practices with the MySQL community. The conference includes six tracks: Performance and Scalability, High Availability, Cloud Computing, Architecture and Design, Database Administration, and Application Development. You can submit conference sessions as well as BOF (Birds-of-a-Feather) sessions. We look forward to hearing from you! Interested in sponsorship and exhibit opportunities? You will find more information here. Registration for MySQL Connect also opened today. Register now to take advantage of the Early Bird discount! MySQL Connect will be jam-packed with technical sessions, hands-on labs and Birds-of-a-Feather (BOF) sessions delivered by MySQL community members, users, customers and MySQL engineers from Oracle. The event is a unique opportunity to learn about the latest MySQL features, discuss product roadmaps, and connect directly with the engineers behind the latest MySQL code.

    Read the article

  • Oracle Tuxedo at Oracle Open World 2012

    - by Deepak Goel
    Oracle Open World is almost here. There is quite a bit of Tuxedo to talk about at this year’s OOW. The primary focus will be on Tuxedo 12c, which was announced in August 2012 and is now generally available. Tuxedo 12c is a major release with many new and exciting features in almost all components of Tuxedo. You will not only hear about these features in various conference sessions; you will also have an opportunity to see them in action at the demo grounds, or play with them yourself in hands-on labs. The following is a listing of Tuxedo-related activities at OOW 2012:
    Conference Sessions
    - Mon 1 Oct, 10:45 AM - 11:45 AM: Oracle Tuxedo: What’s New in 12c, Strategy, and Roadmap. Moscone South - 309
    - Mon 1 Oct, 4:45 PM - 5:45 PM: Simplify Operations, Administration, and Management of Oracle Tuxedo Applications. Marriott Marquis - Golden Gate C3
    - Wed 3 Oct, 3:30 PM - 4:30 PM: The Art and Practice of Mainframe Migration and Modernization. Moscone South - 309
    - Thu 4 Oct, 2:15 PM - 3:15 PM: High-Performance, Scalable Enterprise Messaging for C/C++/COBOL Applications. Marriott Marquis - Salon 7
    HOL (Hands-on Lab)
    - Tue 2 Oct, 5:00 PM - 6:00 PM: Deploy, Manage, and Monitor Oracle Tuxedo Applications in the Enterprise Cloud. Marriott Marquis - Salon 3/4
    - Wed 3 Oct, 1:15 PM - 2:15 PM: Develop C/C++ Applications for the Cloud with Oracle Tuxedo and Oracle Solaris Studio. Marriott Marquis - Salon 5/6
    BOF (Birds-of-a-Feather)
    - Mon 1 Oct, 6:15 PM - 7:00 PM: Develop Scalable, Highly Available Enterprise Services in Java with Oracle Tuxedo. Marriott Marquis - Golden Gate C1
    Demos
    - Oracle Tuxedo: #1 Enterprise Cloud Platform for C/C++/COBOL Apps. Moscone South, Right - S-215
    - Mainframe Rehost with Oracle Tuxedo Runtimes for CICS, IMS, and Batch. Moscone South, Right - S-216
    Tuxedo Customer Appreciation Dinner
    - Monday 1 Oct, 7:30 PM. Please contact your Oracle Account Representative to attend. Limited seating.
    Deepak Goel, Sr. Director, Software Development, Oracle

    Read the article

  • Oracle VirtualBox does not open since upgrade

    - by Langjan
    After upgrading to Ubuntu 12.10, I have been unable to restart my Oracle VirtualBox.

    jan@jan-System-Product-Name:~$ sudo /etc/init.d/vboxdrv setup
    sudo: /etc/init.d/vboxdrv: command not found

    Where do I go from here? Can anyone help, please? I tried to install VirtualBox via these commands:

    echo "deb http://download.virtualbox.org/virtualbox/debian $(lsb_release -sc) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
    wget -q http://download.virtualbox.org/virtu...racle_vbox.asc -O- | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install virtualbox-4.2

    I also attempted to install via the package manager, but the VM would not start. These error reports are received:

    Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings (VERR_NOT_FOUND).
    Result Code: NS_ERROR_FAILURE (0x80004005)
    Component: Console
    Interface: IConsole {db7ab4ca-2a3f-4183-9243-c1208da92392}

    I installed the Extension Pack, but it made no difference. I have added myself as a user, but the error report says a user must be added; when I redo the add-user step, it says the user is already added. The following outputs were also received:

    Failed to open a session for the virtual machine Nuwe skelm.
    Implementation of the USB 2.0 controller not found!

    I cannot access the USB 2.0 setting to disable it. Where do I go from here, please?

    Read the article

  • "cannot open file system. File system seems damaged "

    - by suresh kadiri
    I was using Windows 7 until yesterday. I tried to install Ubuntu 14.04 LTS alongside Windows 7 yesterday, but it did not succeed. Then I decided to install Ubuntu only, and by mistake I installed Ubuntu onto the whole disk. After that, to get the deleted partitions back, I installed TestDisk and also used the deeper search option. Now I'm getting "file system damaged". It shows:

    The hard disk (320 GB / 298 GiB) seems too small! (< 473 GB / 441 GiB)
    Check the hard disk size: HD jumper settings, BIOS detection...
    The following partitions can't be recovered:
    Partition    Start         End           Size in sectors
    Linux        19077 177 45  57604 81 13   618930716
    Linux        19080 192 57  57607 96 25   618930716

    With UBCD I also used the TestDisk option, with the same result: "Cannot open file system. File system seems damaged." I have all my stuff on this hard disk. Please help me recover my files in the deleted partitions.

    Read the article

  • Restore audio settings - cannot open mixer: No such file or directory

    - by Alfred M.
    The internal speaker of my laptop never functioned under Ubuntu. I tried to follow instructions found on the web, and now the jack audio does not work either. The graphical interface for audio management now displays a 'dummy output' instead of the three possible outputs I used to have (one of them was working for the jack output). In a terminal, alsamixer raises an error: cannot open mixer: No such file or directory. I did try to remove and reinstall alsa-utils, but it did not change anything. This happened after a failed attempt to install alsa-driver-linuxant_1.0.23.1_all.deb from here. My sound card seems to be no longer recognised. After reboot I no longer have the sound icon in the menu bar in the upper right corner. I think I have removed my sound card driver. Indeed, the command sudo lshw -class multimedia shows the audio device as unclaimed. Any idea how I could revert to a better situation (that is, jack support and ALSA working)? EDIT: The command lspci -nnk | grep -iEA3 audio gives:

    00:1b.0 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e] (rev 03)
            Subsystem: ASUSTeK Computer Inc. Device [1043:1893]
    00:1c.0 PCI bridge [0604]: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 [8086:2940] (rev 03)

    Read the article
