Search Results

Search found 46487 results on 1860 pages for 'reading files'.


  • PowerShell script to find files that are consuming the most disk space

    As you know, SQL Server databases and backup files can take up a lot of disk space. When disk space is running low and you need to troubleshoot, the first thing to do is find the large files that are consuming it. In this article I will show you a PowerShell script that you can use to find large files on your disks.
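
    The article's PowerShell script itself is not included in this excerpt; purely as an illustration of the same idea, here is a small Java sketch that walks a folder tree and prints the largest files first. The starting folder, the top-20 cutoff, and Java 16+ (for the record syntax) are all assumptions, and unreadable folders are simply skipped.

        import java.io.IOException;
        import java.nio.file.*;
        import java.nio.file.attribute.BasicFileAttributes;
        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class LargestFiles {

            // Simple holder for a path and its size
            record Entry(Path path, long size) {}

            public static void main(String[] args) throws IOException {
                Path root = Paths.get(args.length > 0 ? args[0] : "D:\\SQLData"); // hypothetical folder
                List<Entry> entries = new ArrayList<>();

                // Walk the tree, recording every regular file and skipping anything we cannot read
                Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
                    @Override
                    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                        entries.add(new Entry(file, attrs.size()));
                        return FileVisitResult.CONTINUE;
                    }
                    @Override
                    public FileVisitResult visitFileFailed(Path file, IOException exc) {
                        return FileVisitResult.CONTINUE; // ignore access-denied entries
                    }
                });

                // Print the 20 largest files, biggest first
                entries.stream()
                       .sorted(Comparator.comparingLong(Entry::size).reversed())
                       .limit(20)
                       .forEach(e -> System.out.printf("%,15d bytes  %s%n", e.size(), e.path()));
            }
        }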

    Read the article

  • System.getProperty("user.dir") returns the path where my Eclipse is located, not my project root path

    - by facebook-100005613813158
    As the title says, I have a class named GetException.java. Inside it, I read an XML file in a static initializer block (because the document is shared): static{ ... document = db.parse(new File(System.getProperty("user.dir")+"/src/exception/ExceptionCode.xml")); ... } To test whether the file path is correct, I wrote a main function inside GetException.java, and it proved that the path is correct: the XML file can be read successfully. My project root dir is "/home/wuchang/workspace/MongodbI". But when this class is loaded from another class, for example when I call one of its static functions, it reports the error message: /home/mrs/??/eclipse/src/exception/ExceptionCode.xml (No such file or directory). /home/mrs/??/eclipse/ is actually my Eclipse installation directory. So I wonder why System.getProperty("user.dir") returned the Eclipse installation directory to me instead of my project root directory?
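
    For what it's worth, user.dir is just the JVM's working directory, i.e. whatever folder the launching process (here Eclipse) was started from, so it is not a reliable way to find project files. A common alternative is to load the XML as a classpath resource, which no longer depends on where the JVM was started. A sketch, assuming ExceptionCode.xml ends up on the classpath (for example because Eclipse copies the contents of src to the output folder):

        import java.io.InputStream;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class GetException {

            private static Document document;

            static {
                try {
                    // Resolve relative to the classpath, not the process working directory;
                    // returns null if exception/ExceptionCode.xml is not on the classpath
                    InputStream in = GetException.class
                            .getResourceAsStream("/exception/ExceptionCode.xml");
                    document = DocumentBuilderFactory.newInstance()
                            .newDocumentBuilder()
                            .parse(in);
                } catch (Exception e) {
                    throw new ExceptionInInitializerError(e);
                }
            }
        }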

    Read the article

  • Files Not Uploading

    - by Howdy_McGee
    So I'm running WordPress. I make changes to theme files and upload them successfully, but none of the changes show up on the actual website. At first I thought it was the wrong theme, but specific theme files I created are in there, so I'm using the right theme. Then I thought it was a server problem and maybe the company's servers were down, so I checked a few other websites and updated information just fine; all are on Linux servers. Then I jumped to WordPress to make sure it wasn't a WordPress problem, but I can update the files fine from the Admin Panel. I checked to make sure it wasn't a FileZilla cache or browser cache, so I cleared them both. What could be the problem? If it had to do with FileZilla client permissions, I imagine I would get an error on upload, but it uploads just fine. Suggestions would be extremely helpful; I have no clue.

    Read the article

  • Access to my files on Android

    - by user18644
    I am thinking of subscribing to Dropbox, which is slightly more costly than Ubuntu One, but I need access to my files on the go, and I prefer my smartphone to my netbook most of the time as I like to travel light. I do not want to stream music; I want access to my files only. While there is a free app for Dropbox to access those files, there isn't one for Ubuntu One. I would be prepared to wait a while if you have this in hand; have you actually given it any thought? Please tell me whether I should ignore Ubuntu One and link up with Dropbox.

    Read the article

  • Lock ups, crashing, transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems it hasn't been discussed yet, or at least I can't find any related information. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup: a QNAP 410 as iSCSI target with multiple LUNs (CRC is turned on: Data Digest and Header Digest), and Server 2003 with iSCSI Initiator version 2.08, build 3825. I'm copying files from another machine to shares on Server 2003, i.e. into the TrueCrypt volume and therefore onto the NAS. I have mounted the LUN and formatted it with TrueCrypt using NTFS (a full format, not a quick one). What happens is that some files, mainly RAR/compressed files, appear to copy but fail. I've tested this in a number of ways and can repeat the process every time. So I thought to check a transfer over iSCSI without TrueCrypt in between, with a plain NTFS format - no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet; I will try that. I also haven't tried it without CRC, but I fail to see how that would affect this. I will update with my findings later. In the meantime, does anyone have any ideas as to what could be wrong? Thanks for your time. Update: I copied a set of files, the ones I was having issues with, to the server and from there copied them into two places within the TrueCrypt volume (mounted on the NAS): a separate directory created in the root of the volume, and the same initial directory I was using in the first instance. Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows shares. I say this because I originally set up the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements - a day wasted as well. While I had that setup, though, I copied my entire file set to the volume files and all files copied without error - over the network, from a PC, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system as I highly doubt that is the cause in light of this finding. So, any ideas?

    Read the article

  • Deletion of SQL Profiler Trace files (.trc)

    - by Mark
    We've noticed a lot of .trc files in our SQL data folder (\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data) on our server. The date range for these files spans over one day and the total file size of all files together is about 21 gigs. I'd like to free up this space but I'm not sure if I can just delete the files manually through Windows Explorer or if I need to do anything in SQL, like run a command or script. Any ideas?

    Read the article

  • How to merge data from many text files into a database

    - by Mirage
    I have around 100 text files. The files contain questions and 3 choices each. The files are named like below:
        ab001.txt  -- contains the question
        ab001a.txt -- the first choice
        ab001b.txt -- the second choice
        ab001c.txt -- the third choice
    There are thousands of files like this. Now I want to insert them into SQL, or perhaps first into Excel, with the question in the first column and the three choices in the other three columns. The first two characters are the same for some files; it looks like they signify some category, so roughly every 30 questions share the same first characters. Any ideas?
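
    A hedged sketch of one way to stitch the pieces together before loading them into SQL or Excel: scan the folder for the question files and write one row per question. The ab001 / ab001a / ab001b / ab001c naming pattern is taken from the description above; the folder name, the semicolon separator and the Java 11+ APIs are assumptions.

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.file.*;
        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class MergeQuestions {
            public static void main(String[] args) throws IOException {
                Path dir = Paths.get("questions");          // hypothetical folder with the .txt files
                Path out = Paths.get("questions.csv");

                try (Stream<Path> files = Files.list(dir);
                     BufferedWriter writer = Files.newBufferedWriter(out)) {

                    // Header row: question code, question text, plus the three choices
                    writer.write("code;question;choiceA;choiceB;choiceC");
                    writer.newLine();

                    // Question files are the ones without a trailing a/b/c letter, e.g. ab001.txt
                    List<String> codes = files.map(p -> p.getFileName().toString())
                                              .filter(n -> n.matches("[a-z]{2}\\d{3}\\.txt"))
                                              .map(n -> n.substring(0, n.length() - 4))
                                              .sorted()
                                              .collect(Collectors.toList());

                    for (String code : codes) {
                        writer.write(String.join(";",
                                code,
                                read(dir.resolve(code + ".txt")),
                                read(dir.resolve(code + "a.txt")),
                                read(dir.resolve(code + "b.txt")),
                                read(dir.resolve(code + "c.txt"))));
                        writer.newLine();
                    }
                }
            }

            // Flatten a file to a single line so it fits in one CSV field
            private static String read(Path p) throws IOException {
                return Files.exists(p) ? Files.readString(p).replaceAll("\\s+", " ").trim() : "";
            }
        }

    The resulting questions.csv can then be imported into Excel or bulk-loaded into a database table with one question and three answer columns, matching the layout described above.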

    Read the article

  • Moving directories full of files over the top

    - by JavaRocky
    I took a backup of a directory which has a number of directories and files inside it. Recently some files have gone missing. I would like to move over just the missing files. I prefer moving files instead of copying, as space is at a premium on this particular box and the files are quite large. How can I achieve this?

    Read the article

  • Creating multiple [stand-alone] zip files?

    - by im_chc
    How can I automatically zip a group of files into multiple zip files (say, 2 MB in size for each file), so that each zip file is a stand-alone zip file? (I.e. not multi-volume zip files, where you can't lose any one of the files or you can't unzip the rest.) Are there any tools available to do so? Actually, I just need to group the files into many groups of about 2 MB each; zipped or not zipped doesn't matter. Thanks!
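
    As a rough sketch of the grouping idea rather than a tool recommendation, the following packs files from a hypothetical input folder into successive stand-alone zip archives, starting a new archive whenever the next file would push the running (uncompressed) total past 2 MB. The folder and archive names are assumptions.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.nio.file.*;
        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        public class SplitIntoZips {
            private static final long LIMIT = 2L * 1024 * 1024;   // ~2 MB per archive (uncompressed)

            public static void main(String[] args) throws IOException {
                Path src = Paths.get("input");                    // hypothetical source folder
                List<Path> files;
                try (Stream<Path> s = Files.list(src)) {
                    files = s.filter(Files::isRegularFile).collect(Collectors.toList());
                }

                int archive = 0;
                long used = LIMIT;                                // forces a new archive for the first file
                ZipOutputStream zip = null;
                for (Path f : files) {
                    long size = Files.size(f);
                    if (used + size > LIMIT) {                    // start the next stand-alone archive
                        if (zip != null) zip.close();
                        archive++;
                        OutputStream out = Files.newOutputStream(Paths.get("part-" + archive + ".zip"));
                        zip = new ZipOutputStream(out);
                        used = 0;
                    }
                    zip.putNextEntry(new ZipEntry(f.getFileName().toString()));
                    Files.copy(f, zip);                           // write the file body into the entry
                    zip.closeEntry();
                    used += size;
                }
                if (zip != null) zip.close();
            }
        }

    Note that a single file larger than 2 MB still ends up alone in its own, larger archive; the sketch only groups, it does not split individual files.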

    Read the article

  • How to delete duplicate restored user files with "(2)" added (Win7)

    - by user332172
    I restored my user files on a Windows 7 system from the Win 7 backup. I selected the wrong restore option and all files were restored; existing unchanged files were restored with the text string " (2)" added. Is there a way to write a batch file or script to do this operation? Example file names: "01 lesson 1" and "01 lesson 1 (2)". I want to delete all files which had " (2)" appended on restore.
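
    A batch file or PowerShell one-liner can certainly do this; purely as an illustration of the logic, here is a small Java sketch (the root folder is a placeholder) that only deletes a " (2)" copy when the matching original still exists, which seems like a sensible safety check before bulk-deleting.

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class RemoveRestoredCopies {
            public static void main(String[] args) throws IOException {
                Path root = Paths.get("C:\\Users\\me\\Documents");    // hypothetical restore location

                List<Path> copies;
                try (Stream<Path> walk = Files.walk(root)) {
                    copies = walk.filter(Files::isRegularFile)
                                 .filter(p -> baseName(p).endsWith(" (2)"))
                                 .collect(Collectors.toList());
                }

                for (Path copy : copies) {
                    String name = copy.getFileName().toString();
                    String base = baseName(copy);
                    String ext  = name.substring(base.length());       // ".txt", ".mp3", or ""
                    String originalName = base.substring(0, base.length() - " (2)".length()) + ext;
                    Path original = copy.resolveSibling(originalName);

                    // Only delete the "(2)" copy if the original file actually exists
                    if (Files.exists(original)) {
                        System.out.println("Deleting " + copy);
                        Files.delete(copy);
                    }
                }
            }

            // File name without its extension, e.g. "01 lesson 1 (2)" for "01 lesson 1 (2).mp3"
            private static String baseName(Path p) {
                String name = p.getFileName().toString();
                int dot = name.lastIndexOf('.');
                return dot < 0 ? name : name.substring(0, dot);
            }
        }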

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
    In the previous post we discussed a “Hello World” example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.
    Using OSCH External Tables for Access and Loading
    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:
        SELECT * FROM my_hdfs_external_table;
    or use the same SQL access to load a table in Oracle:
        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:
        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively:
        /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */
    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
    Determine Your DOP
    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.
Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.
Determining the Number of Location Files
Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).
Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.
For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool.
Rule 3: The number of location files chosen should be a small multiple of the DOP.
Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
Determining the Number of HDFS Files
Let’s start with the next rule and then explain it:
Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size.
In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool’s ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
Next Steps
So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story.
They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
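
    As a toy illustration of the greedy balancing behavior described above (this is not OSCH's actual code, just the general idea of always handing the next-largest file to the least-loaded location file), using the post's one-900GB-plus-seven-100GB example:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class GreedyBalance {

            // Assigns file sizes to `buckets` location files: largest file first, least-loaded bucket first
            static List<List<Long>> balance(long[] fileSizes, int buckets) {
                long[] sizes = fileSizes.clone();
                Arrays.sort(sizes);                               // ascending; we walk it backwards

                List<List<Long>> assignment = new ArrayList<>();
                long[] load = new long[buckets];
                for (int i = 0; i < buckets; i++) assignment.add(new ArrayList<>());

                for (int i = sizes.length - 1; i >= 0; i--) {     // biggest remaining file first
                    int target = 0;
                    for (int b = 1; b < buckets; b++)             // find the least-loaded location file
                        if (load[b] < load[target]) target = b;
                    assignment.get(target).add(sizes[i]);
                    load[target] += sizes[i];
                }
                return assignment;
            }

            public static void main(String[] args) {
                // The skewed example from the post: one 900GB file and seven 100GB files over 8 buckets
                long[] gb = {900, 100, 100, 100, 100, 100, 100, 100};
                balance(gb, 8).forEach(bucket -> System.out.println(bucket + " -> "
                        + bucket.stream().mapToLong(Long::longValue).sum() + " GB"));
            }
        }

    Running it with 160 files of ~10 GB instead reproduces the balanced outcome the post describes: every bucket ends up near 200 GB.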

    Read the article

  • skip reading headers in matlab

    - by Paul
    I had a similar question, but what I am trying now is to read files in .txt format into MATLAB. My problem is with the headers. Many times, due to errors, the system rewrites the headers in the middle of the file and then MATLAB cannot read the file. Is there a way to skip them? I know I can skip reading some characters if I know what the character is. Here is the code I am using: [c,pathc]=uigetfile({'*.txt'},'Select the data','V:\data'); file=[pathc c]; data= dlmread(file, ',', 1,4); This way I let the user pick the file. My files are huge, typically [86400 125], so naturally each has 125 header fields or more, depending on the file. Because the files are so big I cannot copy one here, but the format is like: day time col1 col2 col3 col4 ... 2/3/2010 0:10 3.4 4.5 5.6 4.4 ... and so on. Thanks.

    Read the article

  • Personal Technology – Excel Tip: Comparing Excel Files

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. I have been writing Excel tips on my blog and thought it would be great to share one interesting tip here as a guest post. Assume a situation where you want to compare multiple Excel files. Here is a typical scenario I have encountered as a common activity. Assume you are sending an Excel file with tons of data, formulae and multiple sheets. Now you ask your colleague to validate the file and, if required, change content for correctness. After receiving the file back from your colleague, you want to know what changes this person made to your document. Here is a cool new addition to Excel 2013 that can help you achieve this task. To get to this option, click the INQUIRE tab. In case you don’t have the INQUIRE tab, check the Options using INQUIRE blog post; in that post, we discuss all the other options of the INQUIRE tab. Once you are on the INQUIRE tab, select the “Compare Files” button as shown in the figure above. This brings up a dialog as below. If you are on Windows 8 or Windows 7, you can instead search for an application called “Spreadsheet Compare 2013”. Ultimately both options lead us to the same application. If you are using the stand-alone app, once it initializes, click the “Compare files” option on the toolbar. Make sure to give it two different Excel files, as shown in the figure above. After selecting the Excel sheets, you can see the Compare tool has a number of other options to play with. We will talk about some of them later in this post. Just below the toolbar is a colorful side-by-side comparison of both our Excel sheets. We can also see the various tabs from each file. Each color code has a meaning, which is discussed next. The bottom pane lists each of the color codes and, most importantly, each of the changes compared side-by-side. The detailed information shown below can be exported using the “Export Results” option on the toolbar as a separate Excel workbook, or copied to the clipboard to be used later. The final piece of the puzzle is a graphical view of these difference results by category. We cannot drill down per se, but this is a great way to see that the maximum changes seem to be in “Cell Formats”, and then a few “Calculated Values” have changed. The INQUIRE option and the Spreadsheet Compare 2013 tool are part of Excel 2013. So as you explore the new version of Excel, there are many such hidden features that are worth exploring. Do let us know if you enjoyed learning a new feature today, and I hope you will play around with this feature in your day-to-day challenges when working with Excel files. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • "type" Command Not Working As Expected on Git Bash

    - by trysis
    The type command, in Linux, returns the location, on the filesystem, of the given file, if it is in the current folder or the $PATH. This functionality is also available through Windows with the Git Bash command line program. The command also returns a file's location given the file without its extension (.exe, .vbs, etc.) However, I have run into what seems like a strange corner case where the file exists on the $PATH but doesn't get returned using the command. I am thinking of buying a new computer soon, so I looked up the method of transferring the license key from one computer to another, in preparation for actually doing this. The method I found mentioned the files slmgr.vbs and slui.exe, both of which reside in the C:/Windows\System32 folder, which is in my $PATH, as usual for a Windows computer. However, these two files aren't showing up when I use the type command. Also, neither gets executed when I call the files as commands without their extensions in Git Bash, and only slmgr.vbs gets executed when I call them with the extensions. Finally, slmgr.vbs is shown when listing the folder's contents in Git Bash, as well, but slui.exe isn't. I thought this might have to do with permissions, and, indeed, both files have very restrictive permissions, as you can see in the pictures below, but they both have the same permissions, which wouldn't explain why one gets executed and the other doesn't when called directly, nor why one file is listed on command line but the other isn't. C:\Windows\System32 folder, proving the files exist: File permissions for the Users and Administrators groups for the two files (they are identical): And the folder: type command and its output in Git Bash for the 2 files, plus listing the files in the folder (using grep to filter as the folder is huge), as well as listing part of the $PATH (keep in mind, for all these, that Git Bash changes the paths as they are displayed): Sean@MYPC ~ $ type -a slmgr sh.exe": type: slmgr: not found Sean@MYPC ~ $ type -a slmgr.vbs sh.exe": type: slmgr.vbs: not found Sean@MYPC ~ $ type -a slui sh.exe": type: slui: not found Sean@MYPC ~ $ type -a slui.exe sh.exe": type: slui.exe: not found Sean@MYPC ~ $ slmgr sh.exe": slmgr: command not found Sean@MYPC ~ $ slmgr.vbs /c/WINDOWS/system32/slmgr.vbs: line 2: syntax error near unexpected token `(' /c/WINDOWS/system32/slmgr.vbs: line 2: `' Copyright (c) Microsoft Corporation. A ll rights reserved.' Sean@MYPC ~ $ slui sh.exe": slui: command not found Sean@MYPC ~ $ slui.exe sh.exe": slui.exe: command not found Sean@MYPC ~ $ ls /c/Windows/System32/slui.exe /c/Windows/System32/slmgr.vbs ls: /c/Windows/System32/slui.exe: No such file or directory /c/Windows/System32/slmgr.vbs Sean@MYPC ~ $ echo $PATH /c/Users/Sean/bin:.:/usr/local/bin:/mingw/bin:/bin:/cmd:/c/Python33/:/c/Program Files (x86)/Intel/iCLS Client/:/c/Program Files/Intel/iCLS Client/:/c/WINDOWS/sy stem32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/WINDOWS/System32/WindowsPowerShell /v1.0/:/c/Program Files/Intel/Intel(R) Management Engine Components/DAL:/c/Progr am Files/Intel/Intel(R) Management Engine Components/IPT:/c/Program Files (x86)/ Intel/Intel(R) Management Engine Components/DAL:/c/Program Files (x86)/Intel/Int el(R) Management Engine Components/IPT:/c/Program Files/Intel/WiFi/bin/:/c/Progr am Files/Common Files/Intel/WirelessCommon/:/c/strawberry/c/bin:/c/strawberry/pe rl/site/bin:/c/strawberry/perl/bin:/c/Program Files (x86)/Microsoft ASP.NET/ASP. 
NET Web Pages/v1.0/:/c/Program Files/Microsoft SQL Server/110/Tools/Binn/:/c/Pro gram Files (x86)/Microsoft SQL Server/90/Tools/binn/:/c/Program Files (x86)/Open AFS/Common:/c/HashiCorp/Vagrant/bin:/c/Program Files (x86)/Windows Kits/8.1/Wind ows Performance Toolkit/:/c/Program Files/nodejs/:/c/Program Files (x86)/Git/cmd :/c/Program Files (x86)/Git/bin:/c/Program Files/Microsoft/Web Platform Installe r/:/c/Ruby200-x64/bin:/c/Users/Sean/AppData/Local/Box/Box Edit/:/c/Program Files (x86)/SSH Communications Security/SSH Secure Shell:/c/Users/Sean/Documents/Lisp :/c/Program Files/GCL-2.6.1/lib/gcl-2.6.1/unixport:/c/Chocolatey/bin:/c/Users/Se an/AppData/Roaming/npm:/c/wamp/bin/mysql/mysql5.6.12/bin:/c/Program Files/Oracle /VirtualBox:/c/Program Files/Java/jdk1.7.0_51/bin:/c/Program Files/Node-Growl:/c /chocolatey/bin:/c/Program Files/eclipse:/c/MongoDB/bin:/c/Program Files/7-Zip:/ c/Program Files (x86)/Google/Chrome/Application:/c/Program Files (x86)/LibreOffi ce 4/program:/c/Program Files (x86)/OpenOffice 4/program What's happening? Why aren't these files listed with the type command? Is this issue because of weird Windows permissions, or something even weirder? If permissions, why do they seem to have the same permissions, yet both are not handled in the same way?

    Read the article

  • OAF Page to Upload Files into Server from local Machine

    - by PRajkumar
    1. Create a New Workspace and Project File > New > General > Workspace Configured for Oracle Applications File Name – PrajkumarFileUploadDemo   Automatically a new OA Project will also be created   Project Name -- FileUploadDemo Default Package -- prajkumar.oracle.apps.fnd.fileuploaddemo   2. Create a New Application Module (AM) Right Click on FileUploadDemo > New > ADF Business Components > Application Module Name -- FileUploadAM Package -- prajkumar.oracle.apps.fnd.fileuploaddemo.server Check Application Module Class: FileUploadAMImpl Generate JavaFile(s)   3. Create a New Page Right click on FileUploadDemo > New > Web Tier > OA Components > Page Name -- FileUploadPG Package -- prajkumar.oracle.apps.fnd.fileuploaddemo.webui   4. Select the FileUploadPG and go to the strcuture pane where a default region has been created   5. Select region1 and set the following properties --     Attribute Property ID PageLayoutRN AM Definition prajkumar.oracle.apps.fnd.fileuploaddemo.server.FileUploadAM Window Title Uploading File into Server from Local Machine Demo Window Title Uploading File into Server from Local Machine Demo     6. Create Stack Layout Region Under Page Layout Region Right click PageLayoutRN > New > Region   Attribute Property ID MainRN AM Definition messageComponentLayout   7. Create a New Item messageFileUpload Bean under MainRN Right click on MainRN > New > messageFileUpload Set Following Properties for New Item --   Attribute Property ID MessageFileUpload Item Style messageFileUpload   8. Create a New Item Submit Button Bean under MainRN Right click on MainRN > New > messageLayout Set Following Properties for messageLayout --   Attribute Property ID ButtonLayout   Right Click on ButtonLayout > New > Item   Attribute Property ID Submit Item Style submitButton Attribute Set /oracle/apps/fnd/attributesets/Buttons/Go   9. 
Create Controller for page FileUploadPG Right Click on PageLayoutRN > Set New Controller Package Name: prajkumar.oracle.apps.fnd.fileuploaddemo.webui Class Name: FileUploadCO   Write Following Code in FileUploadCO processFormRequest   import oracle.cabo.ui.data.DataObject; import java.io.FileOutputStream; import java.io.InputStream; import oracle.jbo.domain.BlobDomain; import java.io.File; import oracle.apps.fnd.framework.OAException; public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) { super.processFormRequest(pageContext, webBean);    if(pageContext.getParameter("Submit")!=null)  {   upLoadFile(pageContext,webBean);      } }   -- Use Following Code if want to Upload Files in Local Machine -- ----------------------------------------------------------------------------------- public void upLoadFile(OAPageContext pageContext,OAWebBean webBean) { String filePath = "D:\\PRajkumar";  System.out.println("Default File Path---->"+filePath);  String fileUrl = null;  try  {   DataObject fileUploadData =  pageContext.getNamedDataObject("MessageFileUpload"); //FileUploading is my MessageFileUpload Bean Id   if(fileUploadData!=null)   {    String uFileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");  // include this line    String contentType = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");  // For Mime Type    System.out.println("User File Name---->"+uFileName);    FileOutputStream output = null;    InputStream input = null;    BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, uFileName);    System.out.println("uploadedByteStream---->"+uploadedByteStream);                               File file = new File("D:\\PRajkumar", uFileName);    System.out.println("File output---->"+file);    output = new FileOutputStream(file);    System.out.println("output----->"+output);    input = uploadedByteStream.getInputStream();    System.out.println("input---->"+input);    byte abyte0[] = new byte[0x19000];    int i;         while((i = input.read(abyte0)) > 0)    output.write(abyte0, 0, i);    output.close();    input.close();   }  }  catch(Exception ex)  {   throw new OAException(ex.getMessage(), OAException.ERROR);  }     }   -- Use Following Code if want to Upload File into Server -- ------------------------------------------------------------------------- public void upLoadFile(OAPageContext pageContext,OAWebBean webBean) { String filePath = "/u01/app/apnac03r12/PRajkumar/";  System.out.println("Default File Path---->"+filePath);  String fileUrl = null;  try  {   DataObject fileUploadData =  pageContext.getNamedDataObject("MessageFileUpload");  //FileUploading is my MessageFileUpload Bean Id     if(fileUploadData!=null)   {    String uFileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");   // include this line    String contentType = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");   // For Mime Type    System.out.println("User File Name---->"+uFileName);    FileOutputStream output = null;    InputStream input = null;    BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, uFileName);    System.out.println("uploadedByteStream---->"+uploadedByteStream);                               File file = new File("/u01/app/apnac03r12/PRajkumar", uFileName);    System.out.println("File output---->"+file);    output = new FileOutputStream(file);    System.out.println("output----->"+output);    input = uploadedByteStream.getInputStream();    
System.out.println("input---->"+input);    byte abyte0[] = new byte[0x19000];    int i;         while((i = input.read(abyte0)) > 0)    output.write(abyte0, 0, i);    output.close();    input.close();   }  }  catch(Exception ex)  {   throw new OAException(ex.getMessage(), OAException.ERROR);  }     }   10. Congratulation you have successfully finished. Run Your page and Test Your Work           -- Used Code to Upload files into Server   -- Before Upload files into Server     -- After Upload files into Server       -- Used Code to Upload files into Local Machine   -- Before Upload files into Local Machine       -- After Upload files into Local Machine

    Read the article

  • Java POI 3.6 XWPF usage guidelines (reading content of docx file)

    - by Mr CooL
    I assume the following objects should be used to read the contents of a DOCX file: XWPFDocument and XWPFWordExtractor. However, the compiler warns me that I have not included the correct libraries on the classpath. I'm kind of lost as to which jar file is the right one to include, since there are so many jar files in the POI libraries. My project involves reading doc and docx files. I've managed to read the contents of a doc file; however, for docx files I'm still having problems. Can anyone show guidelines in terms of the code and libraries (jar files) needed to read the content of a docx file? I'm trying to limit the libraries that need to be added to the project, since I only need to read doc and docx. The following works for doc: fs = new POIFSFileSystem(new FileInputStream(fileName)); HWPFDocument doc = new HWPFDocument(fs); WordExtractor we = new WordExtractor(doc); String[] p = we.getParagraphText();
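
    For the XWPF side, a minimal sketch is below; the file name is an assumption. On the classpath, .docx support generally needs the ooxml half of POI (poi-ooxml plus its ooxml-schemas/xmlbeans/dom4j dependencies) in addition to the core poi jar, but treat the exact jar list as approximate for your POI version.

        import java.io.FileInputStream;
        import org.apache.poi.xwpf.usermodel.XWPFDocument;
        import org.apache.poi.xwpf.extractor.XWPFWordExtractor;

        public class DocxReader {
            public static void main(String[] args) throws Exception {
                // Open the .docx and pull out its full text in one go
                FileInputStream in = new FileInputStream("sample.docx");   // hypothetical file
                try {
                    XWPFDocument docx = new XWPFDocument(in);
                    XWPFWordExtractor extractor = new XWPFWordExtractor(docx);
                    System.out.println(extractor.getText());
                } finally {
                    in.close();
                }
            }
        }

    The HWPF classes used above for .doc historically ship in a separate (scratchpad) jar, which is why the two formats pull in different dependencies; again, check this against the jars of your POI release.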

    Read the article

  • Removing offline/defunct files in SQL server 2008

    - by philox
    How do I remove traces of files marked as OFFLINE or DEFUNCT in Microsoft SQL Server 2008? I have been playing around with a setup where I create a database with 3 file-groups: Primary, FileGroupData and FileGroupIndex. The clustered index uses FileGroupData and a non-clustered index is set to use FileGroupIndex. To simulate a disk failure I shut down SQL Server and manually deleted the files in the index file-group. To start the database I mark the files 'OFFLINE', but after that I can't delete the index files, which are now offline. I don't have backups of the files, as they are merely indexes, but that means I can't restore the files and bring their status back to "ONLINE". How would you recommend removing the files and the file-group, as they still show up in Management Studio under files/file-groups? Management Studio is not able to delete them. As far as I can tell this is different from the question posted in: http://stackoverflow.com/questions/462637/how-do-i-remove-offline-files-from-a-sql-server-2005-database /Philip

    Read the article

  • Maven assembly - Error reading assemblies

    - by Laurent
    Dear all, I have defined a personalized jar-with-dependencies assembly descriptor. However, when I execute it with mvn assembly:assembly, I get : ... [INFO] META-INF/ already added, skipping [INFO] META-INF/MANIFEST.MF already added, skipping [INFO] javax/ already added, skipping [INFO] META-INF/ already added, skipping [INFO] META-INF/MANIFEST.MF already added, skipping [INFO] META-INF/maven/ already added, skipping [INFO] [assembly:assembly {execution: default-cli}] [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error reading assemblies: No assembly descriptors found. My jar-with-dependencies.xml is in src/main/resources/assemblies/. My assembly descriptor is the following : <?xml version='1.0' encoding='UTF-8'?> <assembly> <id>jar-with-dependencies</id> <formats> <format>jar</format> </formats> <dependencySets> <dependencySet> <scope>runtime</scope> <unpack>true</unpack> <unpackOptions> <excludes> <exclude>**/LICENSE*</exclude> <exclude>**/README*</exclude> </excludes> </unpackOptions> </dependencySet> </dependencySets> <fileSets> <fileSet> <directory>${project.build.outputDirectory}</directory> <outputDirectory>/</outputDirectory> </fileSet> <fileSet> <directory>src/main/resources/META-INF/services</directory> <outputDirectory>META-INF/services</outputDirectory> </fileSet> </fileSets> </assembly> And my project pom.xml is : <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>2.2-beta-5</version> <executions> <execution> <id>jar-with-dependencies</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <descriptors> <descriptor>jar-with-dependencies.xml</descriptor> </descriptors> <archive> <manifest> <mainClass>org.my.app.HowTo</mainClass> </manifest> </archive> </configuration> </execution> </executions> </plugin> When mvn assembly:assembly is performed, dependencies are unpacked and I get the previous error when unpack has finished. Moreover, if I execute mvn -e assembly:assembly it is say that no descriptors has been found, however it try to unpack dependencies and a JAR with dependencies is created but it doesn't contain META-INF/services/* as specified in descriptor : [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error reading assemblies: No assembly descriptors found. [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: Error reading assemblies: No assembly descriptors found. 
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: Error reading assemblies: No assembly descriptors found. at org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:356) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.plugin.assembly.io.AssemblyReadException: No assembly descriptors found. at org.apache.maven.plugin.assembly.io.DefaultAssemblyReader.readAssemblies(DefaultAssemblyReader.java:206) at org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:352) ... 19 more I don't see my error. Does someone has a solution ? Kind Regards Laurent

    Read the article

  • Reading multiple Emacs info files simultaneously

    - by pajato0
    For reading programming (and other) documentation, the Emacs INFO mode is outstanding. So outstanding that I would like to be able to read, say, the Emacs Lisp info file and the org-mode info file simultaneously without traversing back up to the beginning of the info tree. Either I've missed something obvious or I will need to hack some Emacs Lisp to achieve the goal. Then again, someone may have already cracked this nut. So I guess my question is: what is the state of the practice for reading multiple INFO files in Emacs simultaneously?

    Read the article

  • What do I need in order to extract and combine text files from multiple ZIP files, via command line?

    - by Iszi
    I've got an interesting scripting challenge in front of me. I'm fairly certain there's a way to do it, but I feel like I'm probably lacking some particular tools and/or functional knowledge. There are some fifty-plus ZIP files that each contain, among other things, text files that need to be merged with one another. The structure is something like this:
        C:\Reports\FirstJob-1.zip   |-MyName |-FirstJob |-1 |-[Some other folders] |-TXTReports |-English |-[Some other files] |-Report.txt
        C:\Reports\FirstJob-2.zip   |-MyName |-FirstJob |-1 |-[Some other folders] |-TXTReports |-English |-[Some other files] |-Report.txt
        C:\Reports\SecondJob-1.zip  |-MyName |-SecondJob |-1 |-[Some other folders] |-TXTReports |-English |-[Some other files] |-Report.txt
    If I had all the Report.txt files in one regular folder, and uniquely named, I could probably just write a FOR statement that targets *.txt and runs something like type filename.txt >> Consolidated.txt on each. However, these all have the same file name and are embedded deep within separate ZIP files. The potentially useful tools I currently have at my disposal are Windows XP Professional SP3, PowerShell, and WinZip. I'd rather not download or install anything else, but I do understand that third-party tools (or additional tools from Microsoft or WinZip) may be necessary. Whatever tools I use should run natively in Windows. I really don't want to have to mess with Cygwin or other emulators on this system. At the very least, I need a tool that will allow me to analyze and manipulate ZIP files from the command line. Also, are there any other particular complications to this that I've not yet thought of?
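
    If a Java runtime happens to be present on the machine (an assumption; nothing in the question guarantees it, and the asker would prefer native command-line tools), the built-in java.util.zip classes can pull each Report.txt out and append it to one consolidated file without any additional downloads. A rough sketch, with the C:\Reports path and the TXTReports/English/Report.txt entry suffix taken from the layout above and the output encoding assumed to be UTF-8:

        import java.io.*;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;
        import java.util.Enumeration;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipFile;

        public class MergeReports {
            public static void main(String[] args) throws IOException {
                Path reportsDir = Paths.get("C:\\Reports");
                Path outFile = reportsDir.resolve("Consolidated.txt");

                try (BufferedWriter out = Files.newBufferedWriter(outFile, StandardCharsets.UTF_8);
                     DirectoryStream<Path> zips = Files.newDirectoryStream(reportsDir, "*.zip")) {

                    for (Path zipPath : zips) {
                        try (ZipFile zip = new ZipFile(zipPath.toFile())) {
                            Enumeration<? extends ZipEntry> entries = zip.entries();
                            while (entries.hasMoreElements()) {
                                ZipEntry entry = entries.nextElement();
                                // Grab every TXTReports/English/Report.txt, wherever it sits in the tree
                                if (entry.getName().endsWith("TXTReports/English/Report.txt")) {
                                    out.write("===== " + zipPath.getFileName() + " =====");
                                    out.newLine();
                                    try (BufferedReader in = new BufferedReader(
                                            new InputStreamReader(zip.getInputStream(entry), StandardCharsets.UTF_8))) {
                                        String line;
                                        while ((line = in.readLine()) != null) {
                                            out.write(line);
                                            out.newLine();
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }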

    Read the article

  • Lucene Error While Reading binary block : java.io.EOFException

    - by tushar Khairnar
    Hi, I am getting java.io.EOFException while reading a binary block from a Lucene index. I am storing a Java object as a byte array in a Lucene index field and reading it back when a hit occurs. Here is the stack trace:
        Caused by: java.io.EOFException
        at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2281)
        at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2750)
        at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:780)
        at java.io.ObjectInputStream.(ObjectInputStream.java:280)
        at org.terracotta.modules.searchable.util.SerializationUtil$OIS.(SerializationUtil.java:20)
    I have some background threads which write to the index, but I buffer the documents and write them in batches of about 1000. Occasionally I also call optimize() on the index. After writing, I re-open the IndexReader. Is this happening because of the IndexReader re-open call? Thanks. Regards, Tushar
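
    For what it's worth, an EOFException out of ObjectInputStream's readStreamHeader usually means the byte array handed to it is empty or truncated rather than a complete serialized object. One low-tech check is to round-trip the object with plain JDK serialization outside Lucene and compare the byte length you wrote with the byte length you read back from the stored field. A minimal sketch (nothing here is Lucene-specific, and the sample string is arbitrary):

        import java.io.*;

        public class RoundTrip {
            // Serialize any Serializable object to a byte array
            static byte[] toBytes(Serializable value) throws IOException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(bytes);
                out.writeObject(value);
                out.close();
                return bytes.toByteArray();
            }

            // Deserialize it back; a truncated or empty array fails here with EOFException
            static Object fromBytes(byte[] data) throws IOException, ClassNotFoundException {
                ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data));
                try {
                    return in.readObject();
                } finally {
                    in.close();
                }
            }

            public static void main(String[] args) throws Exception {
                byte[] data = toBytes("hello index");
                System.out.println(fromBytes(data));   // round-trips fine
                // Feeding anything shorter than data.length back in reproduces the EOFException
            }
        }

    If the lengths differ between what was written and what comes back from the hit, the suspect is how the binary field is stored or how the reader is refreshed after the background batches commit, rather than the serialization itself.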

    Read the article

  • Reading HttpURLConnection InputStream - manual buffer or BufferedInputStream?

    - by stormin986
    When reading the InputStream of an HttpURLConnection, is there any reason to use one of the following approaches over the other? I've seen both used in examples.
    Manual buffer:
        while ((length = inputStream.read(buffer)) > 0) { os.write(buffer, 0, length); }
    BufferedInputStream:
        is = http.getInputStream();
        bis = new BufferedInputStream(is);
        ByteArrayBuffer baf = new ByteArrayBuffer(50);
        int current = 0;
        while ((current = bis.read()) != -1) { baf.append(current); }
    EDIT: I'm still new to HTTP in general, but one consideration that comes to mind is that if I am using a persistent HTTP connection, I can't just read until the input stream is empty, right? In that case, wouldn't I need to read the message length and read the input stream only for that length? And similarly, if NOT using a persistent connection, is the code I included 100% good to go in terms of reading the stream properly?
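
    In practice the two snippets differ mainly in where the buffering happens: BufferedInputStream batches the underlying reads so the byte-at-a-time loop stays cheap, while the manual byte[] loop does the batching itself. On the persistent-connection question, HttpURLConnection's stream ends at the end of the response body (it tracks Content-Length or chunked encoding for you), so reading until -1 is the usual approach. A hedged sketch combining the two ideas, with the URL and buffer size as arbitrary choices:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class Download {
            static byte[] fetch(String url) throws IOException {
                HttpURLConnection http = (HttpURLConnection) new URL(url).openConnection();
                try {
                    InputStream in = http.getInputStream();
                    ByteArrayOutputStream body = new ByteArrayOutputStream();
                    byte[] buffer = new byte[8192];          // chunked copy; no per-byte reads
                    int length;
                    while ((length = in.read(buffer)) != -1) {
                        body.write(buffer, 0, length);       // same names on both sides of the copy
                    }
                    in.close();
                    return body.toByteArray();
                } finally {
                    http.disconnect();
                }
            }

            public static void main(String[] args) throws IOException {
                System.out.println(fetch("http://example.com/").length + " bytes");
            }
        }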

    Read the article

  • Python File Meta Tag reading

    - by Jeff
    Anyone know of a Python module that can pull Tag data from multiple media formats? Trying to build an app that allows for manipulation of ASF (Windows Media Player files, ie WMA, WMV, etc), ID3, including both ID3v1 and ID3v2 (MPEG files, ie MP3), MPEG Audio Bit Stream (ie ABS, MP1, MP2, MP3), MPEG Program Stream (MPEG movies, and DVD and HD DVD video discs, ie MPG, MPEG, VOB, EVO), and ISO Base Media File Format (eg QuickTime, MPEG-4 and iTunes AAC files, ie QT, MOV, MP4, M4A, M4B, M4P, M4V, etc). Don't need ALL of that but just most standard consumer formats like mov and mpeg. I can't seem to find a good module to support that or a library. Any recommendations?

    Read the article

  • file reading in python

    - by Jagdev
    So my whole problem is that I have two files, one with the following format (for Python 2.6): #comments config = { #comments 'name': 'hello', 'see?': 'world':'ABC',CLASS=3 } This file has a number of sections like this. The second file has the format: [23] [config] 'name'='abc' 'see?'= [23] Now the requirement is that I need to compare both files and generate a file like: #comments config = { #comments 'name': 'abc', 'see?': 'world':'ABC',CLASS=3 } So the result file will contain the values from the first file, unless a value for the same attribute is present in the second file, in which case that value overwrites it. Now my problem is how to manipulate these files using Python. Thanks in advance (and for your previous answers in short time); I need to use Python 2.6.

    Read the article

  • Reading Excel spreadsheets with Delphi

    - by Bruce McGee
    I need to read from and write to Excel spreadsheets using Delphi 2010. Nothing fancy. Just reading and writing values from specific cells and ranges on different sheets. Needs to work without having Excel installed and support Excel 2007. Some things I've looked at: I've tried using ADO, which works OK for selecting everything in an entire sheet, but I haven't had much luck reading specific cells or ranges. NativeExcel looked promising, but it doesn't seem to be in active development, and they don't respond to e-mails. Axolot has a couple of products. The main product seems to be very functional, but is pricey. They have a lite version, but it doesn't support Delphi 2010. Any recommendations? Free would be great, but I'm open to a commercial solution as long as it's reliable and well supported.

    Read the article
