Search Results

Search found 70459 results on 2819 pages for 'file sync'.


  • Not seeing Sync Block in Object Layout

    - by bob-bedell
    It's my understanding that all .NET object instances begin with an 8-byte object header: a sync block (a 4-byte pointer into a SyncTableEntry table) and a type handle (a 4-byte pointer into the type's method table). I'm not seeing this in the memory windows of the VS 2010 RC (CLR 4.0) debugger. Here's a simple class that will generate a 16-byte instance, not counting the object header:

        class Program
        {
            short myInt = 2;             // 4 bytes
            long myLong = 3;             // 8 bytes
            string myString = "aString"; // 4 byte object reference
            // 16 byte instance

            static void Main(string[] args)
            {
                new Program();
                return;
            }
        }

    An SOS object dump tells me that the total object size is 24 bytes. That makes sense: my 16-byte instance plus an 8-byte object header.

        !DumpObj 0205b660
        Name: Offset_Test.Program
        MethodTable: 000d383c
        EEClass: 000d13f8
        Size: 24(0x18) bytes
        File: C:\Users\Bob\Desktop\Offset_Test\Offset_Test\bin\Debug\Offset_Test.exe
        Fields:
              MT    Field   Offset            Type VT     Attr    Value Name
        632020fc  4000001       10    System.Int16  1 instance        2 myInt
        632050d8  4000002        4    System.Int64  1 instance        3 myLong
        631fd2b8  4000003        c   System.String  0 instance 0205b678 myString

    Here's the raw memory:

        0x0205B660  000d383c 00000003 00000000 0205b678 00000002 ...

    And here are some annotations:

        offset 0   000d383c          ; TypeHandle (pointer to MethodTable), 4 bytes
        offset 4   00000003 00000000 ; myLong, 8 bytes
        offset 12  0205b678          ; myString, 4-byte reference to the string on the GC heap
        offset 16  00000002          ; myInt, 4 bytes

    My object begins at address 0x0205B660, but I can only account for 20 bytes of it: the type handle and the instance fields. There is no sign of a sync block pointer. The object size is reported as 24 bytes, but the debugger shows it occupying only 20 bytes of memory.

    I'm reading "Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects" and expected the first 4 bytes of my object to be a zeroed sync block pointer, as shown in Figure 8 of that article. Granted, that article is about CLR 1.1. I'm wondering whether the difference between what I'm seeing and what that early article reports is a change in the debugger's display of object layout, or a change in the way the CLR lays out objects in versions later than 1.1. Anyway, can anyone account for my 4 missing bytes?
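
    A detail worth checking here (my note, not part of the original question): the sync block field is commonly described as living at a negative offset, in the 4 bytes immediately before the address an object reference points to, so it would not show up in a dump that starts at 0x0205B660. In WinDbg with SOS loaded, one could inspect the DWORD just before the object, for example:

        dd 0205b660-4 L1   ; the 4 bytes preceding the object address; on this layout
                           ; they would hold the sync block field (zero when unused)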


  • How to upload a file into a database using a Servlet?

    - by user1765496
    Hi all, I am working with servlets and I need to upload a file from a servlet. Here is my code:

        package com.limrasoft.image.servlets;

        import java.io.*;
        import javax.servlet.*;
        import javax.servlet.http.*;
        import javax.servlet.annotation.*;
        import java.sql.*;

        @WebServlet(name="serv1", value="/s1")
        public class Account extends HttpServlet {
            public void doPost(HttpServletRequest req, HttpServletResponse res)
                    throws ServletException, IOException {
                res.setContentType("text/html");
                PrintWriter pw = res.getWriter();
                try {
                    Class.forName("oracle.jdbc.driver.OracleDriver");
                    Connection con = null;
                    try {
                        con = DriverManager.getConnection(
                                "jdbc:oracle:thin:@localhost:1521:xe", "system", "sajid");
                        String s1 = req.getParameter("un");
                        String s2 = req.getParameter("pwd");
                        String s3 = req.getParameter("g");
                        String s4 = req.getParameter("uf");
                        PreparedStatement ps = con.prepareStatement(
                                "insert into account values(?,?,?,?)");
                        ps.setString(1, s1);
                        ps.setString(2, s2);
                        ps.setString(3, s3);
                        File file = new File(s4);
                        FileInputStream fis = new FileInputStream(file);
                        int len = (int) file.length();
                        ps.setBinaryStream(4, fis, len);
                        int c = ps.executeUpdate();
                        if (c == 0) { pw.println("<h1>Registration fail"); }
                        else { pw.println("<h1>Registration success"); }
                    } finally {
                        if (con != null) con.close();
                    }
                }
                catch (ClassNotFoundException ce) { pw.println("<h1>Registration Fail"); }
                catch (SQLException se) { pw.println("<h1>Registration Fail"); }
                pw.flush();
                pw.close();
            }
        }

    I have written the above code to upload a file into the database, but it gives the error "HTTP Status 500 - Servlet3.java (The system cannot find the file specified)". Could you please help me with this code? Thanks in advance.
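
    A likely cause of the error, added as an editorial note: req.getParameter("uf") yields only the file name the user typed into the form, not a path that exists on the server, so new FileInputStream(...) cannot find it. The usual approach is to post the form as multipart/form-data and read the bytes from the request itself. Below is a minimal sketch using the Servlet 3.0 multipart API; the servlet name, URL and field name are illustrative assumptions, not from the original post:

        import java.io.*;
        import javax.servlet.*;
        import javax.servlet.http.*;
        import javax.servlet.annotation.*;

        @WebServlet("/upload")
        @MultipartConfig // enables multipart/form-data parsing (Servlet 3.0+)
        public class Upload extends HttpServlet {
            protected void doPost(HttpServletRequest req, HttpServletResponse res)
                    throws ServletException, IOException {
                // reads the uploaded file from the request body;
                // matches <input type="file" name="uf"> in the form
                Part filePart = req.getPart("uf");
                try (InputStream in = filePart.getInputStream()) {
                    // stream 'in' into the BLOB column instead of opening a client-side path,
                    // e.g. ps.setBinaryStream(4, in, (int) filePart.getSize());
                }
            }
        }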


  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I've shown you our SuperForm test application's solution structure and what the main wxs and wxi include files look like. In this post I'll show you how to automate the inclusion of the files to install into your build process.

    For our SuperForm application we have a single exe to install. But in the real world we have tens or hundreds of different files, from dll's to resource files like pictures. It all depends on what kind of application you're building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe. Heat is a command line utility to harvest a file, directory, Visual Studio project, IIS website or performance counters. You might ask what harvesting means? Harvesting is converting a source (file, directory, ...) into a component structure saved in a WiX fragment (a wxs) file. There are two options you can use:

    1. Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor.

    2. Run the Heat.exe command line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don't have to worry about missing any new files you add. The con of this is that you'll include files that you otherwise might not want to.

    There is no perfect solution, so pick one and deal with it. I prefer the second way. A neat way of overcoming its con is to have a post-build event on your main application project (SuperForm.MainApp in our case) copy the files to be installed to a special location, and have Heat.exe read them from there (see the sketch at the end of this post). I haven't set this up for this tutorial, and I'm simply including all files from the default SuperForm.MainApp\bin directory.

    Remember how we created a system environment variable called SuperFormFilesDir? This is where we use it for the first time. The command line text that you have to put into the pre-build event of your WiX project looks like this:

        "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs"

    After you install WiX you'll get the WIX environment variable. In pre/post-build events, environment variables are referenced like this: $(WIX). By using this you don't have to think about WiX's installation path. Remember: for 32-bit applications the Program Files folder is named differently on 32-bit and 64-bit systems. $(ProjectDir) is obviously the path to your project and is a built-in Visual Studio variable.

    You can view all Heat.exe options by running it without parameters, but I'll explain the ones that stick out the most:

    dir "$(SuperFormFilesDir)": tells Heat to harvest the whole directory at the given location, the location we've set in our system environment variable.

    -cg SuperFormFiles: the name of the component group that will be created. This name is referenced in our Feature tag, as seen in the previous post.

    -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top-level directory structure in the previous post.

    -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file.

    -out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved.

    If you have source control you have to include FilesFragment.wxs in your project but remove its source control binding. The auto-generated FilesFragment.wxs for our test app looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <Fragment>
            <ComponentGroup Id="SuperFormFiles">
              <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" />
              <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" />
              <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" />
              <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" />
            </ComponentGroup>
          </Fragment>
          <Fragment>
            <DirectoryRef Id="INSTALLLOCATION">
              <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}">
                <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" />
              </Component>
              <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}">
                <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" />
              </Component>
              <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}">
                <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" />
              </Component>
              <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}">
                <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" />
              </Component>
            </DirectoryRef>
          </Fragment>
        </Wix>

    The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways Heat.exe can compose the wxs file, but this is the one I prefer; it just seems the clearest. Play with its options to see what it can do. It's one awesome little tool.

    WiX 3 tutorial by Mladen Prajdic navigation:
    WiX 3 Tutorial: Solution/Project structure and Dev resources
    WiX 3 Tutorial: Understanding main wxs and wxi file
    WiX 3 Tutorial: Generating file/directory fragments with Heat.exe
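
    As a concrete illustration of the post-build copy idea mentioned above (an editorial sketch, not part of the original tutorial; the xcopy flags and use of the blog's own variables are assumptions), the main application project could mirror its output into the harvest folder like this:

        rem Post-build event on SuperForm.MainApp: copy build output to the harvest folder
        xcopy "$(TargetDir)*.*" "$(SuperFormFilesDir)\" /E /Y /I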


  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder. That folder could be seen as a kind of inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that are hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which isn't efficient enough to let me use the folder productively. I'd rather do a few clicks than have to search through the files, whether by some software product or by looking through the folder. Often the file names themselves are not proper, so it would be easier to recognize them if there were few files in a folder rather than thousands.

    "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows:

        The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

        [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
        [2] S. Lawrence and C. L. Giles, Nature 400, 107-109 (1999).
        [3] R. F. i Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
        [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    The paper goes on to explain how such data is usually organized by taking a general look at it, but judging from the abstract and conclusion it doesn't arrive at a conclusion or approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it.

    Searching further, I don't seem to find anything useful or any free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noted that there are different ways to term this problem, which lead to different sets of results; perhaps a paper is out there and I'm just not using the same, often more scientific, terms that it uses.

    I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was far more useful than how most of us organize our data these days...

    Don't advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that a given organization helps me reach my goal.


  • Good and easy way to share files on local machine

    - by jb
    I would like to have a directory with the following properties:

    Many users can copy files into it.
    These files can be deleted/changed by those users (user A can delete/modify a file that was copied into the directory by someone else).
    It can't be done using normal file permissions, because permissions are retained on copy.

    Here is what I found on the net: a brainstorm idea and a blueprint.

    Some use cases:

    Sharing music on a local machine.
    Simple git repository sharing (just make a bare repository writeable by many people); I know that there are solutions like gitosis.
    Allowing many developers to modify a test instance of a PHP app without giving them root (I guess they would copy files). I'm leading a team of nonprofit junior developers and I need to keep that one simple!

    EDIT: AFAIK setting the SGID bit is not enough; it only affects newly created files, and the basic workflow for these use cases involves copying and other operations, which leave the file's gid unchanged.
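
    One direction worth sketching (an editorial assumption, not from the post): default POSIX ACLs apply to files newly created in a directory, including files that arrive via a plain copy, which is exactly where the SGID bit alone falls short. Assuming a shared group named "devs" and a directory /srv/shared, something like:

        # give the directory a default ACL so anything dropped into it
        # becomes group-writable regardless of the copier's umask
        sudo mkdir -p /srv/shared
        sudo chgrp devs /srv/shared
        sudo chmod 2775 /srv/shared                    # SGID: new files inherit the group
        sudo setfacl -d -m group:devs:rwX /srv/shared  # default ACL inherited by new files

    Note the caveat that tools which deliberately preserve attributes (cp -p, rsync -a) can still override the inherited mode.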


  • How to play VOB files that were inside a DVD disc?

    - by Cristian
    I just downloaded a DVD rip of a movie. After uncompressing it, I see .VOB, .IFO and .BUP files. If I open the first .VOB file it shows me the DVD menu, but I can't interact with it. So, my question is: is Totem able to play those kinds of files? If so, how can I achieve that? What other app could I use to play them?

    Edit: Using VLC didn't work either; I forgot to mention I had already tried that. Let me rephrase: if I open the first video file it shows the DVD menu, BUT I can't interact with it.
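
    An editorial suggestion, not from the question: menus usually only work when the player opens the rip as a DVD structure rather than as a single .VOB. Assuming the files sit in the usual VIDEO_TS layout, something like the following may restore menu navigation (paths are examples):

        # open the folder as a DVD so the menu becomes navigable
        vlc dvd:///path/to/movie/            # folder that contains VIDEO_TS
        mpv dvd:// --dvd-device=/path/to/movie/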


  • Sharing Files between Ubuntu 14.04 and Windows 8

    - by Matinn
    I have Ubuntu and Windows 8 installed on one system. I am trying to share files between the two operating systems using an NTFS partition which was created by Windows. I have no trouble accessing the data on this partition from Ubuntu; however, if I create a file in Ubuntu, the file doesn't show up when I boot into Windows. Does anyone know how to do this? From what I have read, file sharing should work without installing any additional software, as I am not trying to access the Linux ext4 partition from Windows.
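
    A hedged guess at the cause (not stated in the question): Windows 8's Fast Startup hibernates the NTFS volume at shutdown, so Windows resumes from a cached filesystem state and ignores changes written from Ubuntu in the meantime. A common fix is to disable hibernation from an elevated Windows command prompt:

        rem run in an elevated cmd prompt; disables hibernation and with it Fast Startup
        powercfg /h off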


  • How do I re-install Samba?

    - by Sheldon
    I recently followed a guide to configure Samba, but I couldn't get it configured properly. After realizing that the guide was six years out of date, I thought I should start again. I reinstalled Samba by first using these commands:

        sudo apt-get purge samba
        sudo apt-get install samba

    But after reading my configuration file (/etc/samba/smb.conf) I noticed that it was the same file, containing the same edits I had made. I then proceeded to delete the directory and re-install Samba again. However, the directory is not replaced after re-installation, and now I don't appear to have a configuration file at all. How do I get it back? Or install Samba correctly?
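
    A possible explanation, offered as an editorial note: on Ubuntu the default smb.conf is managed by the samba-common package rather than by samba itself, so purging samba alone leaves the edited config behind, and once /etc/samba has been deleted a plain reinstall will not recreate it. A sketch of a fuller reset:

        # purge the packages that own the config, then reinstall to regenerate /etc/samba/smb.conf
        sudo apt-get purge samba samba-common
        sudo rm -rf /etc/samba
        sudo apt-get install samba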


  • Windows 7, file properties - Is "date accessed" ALWAYS 100% accurate?

    - by Robert
    Hello. Here's the situation: I went on vacation for a couple of weeks, but before I left, I took the hard drive out of my computer and hid it in a different location. Upon coming back on Monday and putting the hard drive back in my computer, I right-clicked on different files to see their properties. Interestingly enough, several files had been accessed during the time I was gone! I right-clicked different files in various locations on the hard drive, and all of these suspect files had been accessed within a certain time range (Sunday, January 09, 2011, approximately between 6:52:16 PM and 7:16:25 PM). Some of them had been accessed at the exact same time, down to the very second. This makes me think that someone must have done a search on my hard drive for certain types of files and then copied all those files to some other medium. The Windows 7 installation on this hard drive is password protected, but NOT encrypted, so they could have easily put the hard drive into an enclosure/toaster to access it from a different computer.

    Of course I did not right-click every single file on my computer, but I did so in different folders. For instance, one of the folders I went through has different types of files: .mp3, .prproj, .3gp, .mpg, .wmv, .xmp, .txt, with file sizes ranging from 2 KB to 29.7 MB (there is also a sub-folder in this folder which contains only .jpg files). Of all these different types of files in this folder and its subfolder, all of them had been accessed (including the .jpg files from the sub-folder) EXCEPT the .mp3 files (if it makes any difference, the .mp3 files in this folder range in size from 187 KB to 4881 KB). Additionally, the sub-folder which contained only .jpg files (48 .jpg files to be exact) was not itself accessed during this time; only the .jpg files within it were accessed (between 6:57:03 PM and 6:57:08 PM).

    I thought that perhaps this was some kind of Windows glitch displaying the wrong access date, but then I looked at the "date created" and "date modified" for all of these files in question, and their created/modified dates and times were spot-on correct.

    My first thought was that someone put the hard drive into an enclosure/toaster and viewed the files; but then I realized that this was impossible because several of the files had been accessed at the exact same time, down to the second. So this made me think that the only other way the "date accessed" could have changed would have been if someone copied the files.

    Is there any chance at all whatsoever that this is some kind of Windows glitch, or is it a fact that someone was indeed accessing my files (and if so, am I right that the files in question were copied)? Is there any other possibility for what could have happened? Do I need to use any kind of forensics tools to further investigate this matter (and if so, which tools), or is there any other way to be certain of what took place in that timeframe the day before I got back? Or is what I see in Windows 7 good enough (i.e. accurate and truthful)? Thanks in advance, and please let me know if any other details are required on my part.
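
    One relevant check, added editorially: NTFS last-access updating is disabled by default on Windows 7 for performance, so whether these timestamps are being maintained at all can be verified before drawing conclusions. From a command prompt:

        rem 0 = last-access updates enabled, 1 = disabled (the Windows 7 default)
        fsutil behavior query DisableLastAccess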


  • Photo sharing social platform [closed]

    - by user1696497
    I am working on a photo sharing social platform like Flickr or Photobucket. To start off, I have half a million photos as of now. I want to convert all of these into a single format and compression ratio and use the result as the original image. I will be storing the original image, a resized image according to layout, and a thumbnail. I started off with Ruby but didn't find supporting libraries. I am considering Python, as it has a good image processing library and Instagram uses it. I would like some advice about how the image should be processed while uploading, an efficient way of storing it (database or file system), image compression, and precautions to be taken. I will also have profile pictures; do I need to store them separately or along with the other images? If I store the images on a file system, which file system should I use, and should I store the URL directly or use an intermediate key-value store like Redis?
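
    Since the post leans toward Python for image processing, here is a minimal sketch of the convert-and-derive pipeline using Pillow (an editorial illustration; the sizes, JPEG qualities and paths are assumptions):

        from pathlib import Path
        from PIL import Image  # pip install Pillow

        def process(src: Path, out_dir: Path) -> None:
            """Normalize one upload to JPEG and derive a display copy and a thumbnail."""
            with Image.open(src) as img:
                img = img.convert("RGB")  # normalize mode so JPEG save always works
                img.save(out_dir / "original.jpg", "JPEG", quality=90)
                display = img.copy()
                display.thumbnail((1024, 1024))  # fit within layout size, keeps aspect ratio
                display.save(out_dir / "display.jpg", "JPEG", quality=85)
                thumb = img.copy()
                thumb.thumbnail((200, 200))
                thumb.save(out_dir / "thumb.jpg", "JPEG", quality=80)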


  • Is there a remote file transfer command that preserves nanosecond timestamps?

    - by Denver Gingerich
    I've tried transferring files using scp and rsync on Ubuntu 10.04, but neither of them preserves more than second precision. Here's an example:

        $ touch test1
        $ scp -p test1 localhost:test2
        $ ls -l --full-time test*
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test1
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.000000000 -0500 test2
        $ cp -p test1 test2
        $ ls -l --full-time test*
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test1
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test2
        $

    A straight copy works fine, but scp truncates the timestamp. Are there any tools (preferably similar to scp or rsync in their usage) that do remote file transfers while preserving nanosecond timestamps? I could write a hacky script to do it, but I'd rather not.
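
    One avenue that might work, offered as an untested editorial sketch: GNU tar's pax format records timestamps with sub-second resolution in extended headers, so streaming a pax archive over ssh can carry them across. Whether full nanosecond precision survives depends on the tar versions at both ends:

        # create a pax-format archive (sub-second mtimes in extended headers),
        # stream it over ssh, and unpack on the far side
        tar -cf - --format=posix test1 | ssh remotehost 'tar -xf -'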


  • Recommendations needed for email server and file sharing solutions.

    - by Abeansits
    I work at a relatively small company, around 30 people, and we are now looking into a solution that can handle:

    File sharing.
    Email server.
    Calendar support.
    Around 30 users.
    Accessible from an external network.
    Support for Windows XP (and above), Mac OS 10.6.3 and Ubuntu clients.

    When it comes down to security we don't have the experience to comment on that; I guess the de facto standard is good enough for us. Sorry if this is formulated as a n00b question, because it is. =) Any kind of pointer in the right direction will be appreciated. Thanks in advance! //Abean


  • Vim: Fuzzy file search comparable to Sublime Text 2's?

    - by Jed
    I've tried FuzzyFinder, Command-T, and Ctrl-P (which is my finder of choice right now), but none hold a candle to Sublime Text 2. For example, I want to type:

        Head.php

    and have it find, among others:

        app/code/core/Mage/Page/Block/Html/Head.php

    Currently in Ctrl-P, which has otherwise served me better than Command-T, searching for Head.php gives me these first:

        downloader/lib/Mage/Connect/Command/Config_Header.php
        app/code/local/Namespace/Modals/Helper/Reader.php
        app/code/core/Mage/XMLConnect/Helper/Ipad.php

    My file is nowhere to be found (and I've never opened any of the above files), so I have to type this instead:

        pagehtmlhead.php

    Is there any utility that does smarter scoring/matching?
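
    One tweak that may get closer to the Sublime behaviour (an editorial suggestion, not from the question): CtrlP can rank matches against the file name alone rather than the full path, which helps exactly this "Head.php" case:

        " in ~/.vimrc: match on the filename first instead of the full path
        let g:ctrlp_by_filename = 1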


  • What technical reasons exist for not using space characters in file names?

    - by Chris W. Rea
    Somebody I know expressed irritation today regarding those of us who tend not to use spaces in our filenames, e.g. NamingThingsLikeThis.txt -- despite most modern operating systems supporting spaces in filenames. Are there technical reasons that it's still common to see file names without (appropriate) spaces? If so, what are these technical reasons that spaces in filenames are avoided or discouraged, and in what circumstances are they relevant? The most obvious reason I could think of, and why I typically avoid it, are the extra quotes required on the command line when dealing with such files. Are there any other significant technical reasons?
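
    A small illustration of the most commonly cited technical reason (example mine): unquoted shell expansions split on whitespace, so naive scripts silently break on names containing spaces:

        # naive loop: "My File.txt" is seen as two arguments, "My" and "File.txt"
        for f in $(ls *.txt); do echo "$f"; done

        # robust version: let the shell glob directly and quote the variable
        for f in *.txt; do echo "$f"; done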


  • What hash should be used to ensure file integrity?

    - by Corey Ogburn
    It's no secret that large files offered up for download are often coupled with their MD5 or SHA-1 hash so that after you download, you can verify the file's integrity. Are these still the best algorithms to use for this? Obviously these are very popular hashes that potential downloaders have easy access to. Ignoring that factor, which hashes have the best properties for this use? For example, bcrypt would be horrible for it: it's designed to be slow, which is acceptable for a 12-letter password (which might take up to a second with the right parameters) but would suck on the 7.4 GB dual-layer OS ISO you just downloaded.
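
    For reference, a typical verification round-trip with a SHA-2 hash looks like this (commands from GNU coreutils; the file name is an example):

        # publisher side: record the digest
        sha256sum ubuntu.iso > ubuntu.iso.sha256

        # downloader side: recompute and compare
        sha256sum -c ubuntu.iso.sha256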


  • Is there some way Linux editors can tell the programming language without the file extension?

    - by vfclists
    I am editing some scripts on Linux that don't carry their language's file extension, and it seems that the editors, namely vi, nano and gedit, are not applying syntax highlighting because the filenames don't use the language extensions. Are there parameters to be passed, or some setting, that can enable them to recognize the language?

    Update: After some googling I realize that vim has that ability, at least to do some parsing or check the shebang at the top to determine the language. By default Ubuntu does not install the complete vim package, so after installing it, shell files are recognized. I don't know about nano or gedit, but vi and its graphical counterpart will do.
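
    Two vim mechanisms worth illustrating here (example mine): the shebang on line 1, which vim's filetype detection reads, and an embedded modeline, which works when the 'modeline' option is enabled:

        #!/bin/sh
        # vim: set filetype=sh :
        # with the full vim package installed, either the shebang above or the
        # modeline on line 2 triggers shell syntax highlighting, extension or not
        echo "hello"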


  • No file sharing between two Server 2008 R2 machines

    - by ProfKaos
    I have just replaced XP with Server 2008 R2 on my test server, and have been running 2008 R2 on my dev laptop. When my server was still XP, file sharing just worked, but now it just doesn't. I've enabled everything I can find about sharing, and I can ping the server by machine name, but if I try to access a share, I get asked for a password. The password dialog assumes a domain for this user, and neither my laptop admin user nor my server admin user can get past this login. What am I doing wrong?
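
    One detail that often matters in this situation (an editorial note, not from the question): outside a domain, the username must be qualified with the target machine's name, otherwise the dialog authenticates against the wrong account database. For example, from the laptop (names are examples):

        rem connect explicitly as a local account on the file server; * prompts for the password
        net use \\TESTSERVER\share /user:TESTSERVER\Administrator *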


  • How can I view and sort by the page count of multiple PDF files in a Windows file explorer?

    - by grunwald2.0
    I unsuccessfully used the "Pages" column in Windows Explorer, as well as in Directory Opus 10 and Free Commander XT (which I installed just for that reason, to try it out), to display the page count of multiple PDFs in a folder. All my PDFs are free to edit, i.e. not write-protected. I don't understand why any PDF reader can display the (correct) page number, but none of the file explorers can (in the "details" view, of course). The only documents whose page count is displayed are MS Word documents. Do I have to use Adobe Bridge? (I haven't tried it.) On a side note: did that change in Windows 8?

    Initial research: a Google search was unsuccessful; the only slightly related SE topic I found was "How to count pages in multiple PDF files?".
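
    If a scripted fallback is acceptable, a few lines of Python with the pypdf library can produce a sortable page-count listing (an editorial sketch; the folder path is an example):

        from pathlib import Path
        from pypdf import PdfReader  # pip install pypdf

        folder = Path(r"C:\Users\me\Documents\pdfs")
        counts = [(len(PdfReader(p).pages), p.name) for p in folder.glob("*.pdf")]
        for pages, name in sorted(counts, reverse=True):  # most pages first
            print(f"{pages:5d}  {name}")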


  • How to send an address to the void using the hosts file, without using 127.0.0.1?

    - by magallanes
    I have some hostnames that I want to send straight to the void using the hosts file, but I don't want to use 127.0.0.1. How can I do that? Why? I want to speed up some processes, but 127.0.0.1 is serving a web server, so if I use 127.0.0.1 then those processes will call my web server, consuming resources and possibly delaying the process. Right now I am using 0.0.0.0 instead of 127.0.0.1, but I am not sure if it is correct:

        0.0.0.0 crl.microsoft.com


  • Why does the "file" command get confused by .py files?

    - by pythonic metaphor
    I have several Python modules that I've written. On a whim, I ran file on this directory, and I was really surprised by what I saw. Here's the resulting count of what it thought the files were:

         1 ASCII Java program text, with very long lines
         1 a /bin/env python script text executable
         1 a python script text executable
         2 ASCII C++ program text
         4 ASCII English text
        18 ASCII Java program text

    That's strange! Any idea what's going on, or why it seems to think Python modules are very often Java files? I'm using CentOS 5.2.
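
    A quick demonstration of why a shebang changes the answer (example mine): without one, file falls back to content heuristics and keyword guessing, while an interpreter line on the first line is decisive:

        # without a shebang, file(1) guesses from content heuristics;
        # with one, it reports a python script
        printf 'import os\n' | file -
        printf '#!/usr/bin/env python\nimport os\n' | file -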


  • How to start a cmd window and issue a tail request in a bat file?

    - by Kari
    I can open a cmd window and start a tail by entering something like this:

        tail -f C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\sites.log

    This is probably a stupid question, but how do I do this in a batch file? It should be easy, but it's not working. I have tried a couple of variations with no success. Can anyone tell me what I am doing wrong here?

        ECHO OFF
        CD C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\
        cmd tail -f sites.log

    I've also tried:

        ECHO OFF
        start cmd tail -f C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\sites.log

    (I am using Win7 Ultimate on a 64-bit machine, if that has any bearing.)
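
    For reference, a form that should work (an editorial sketch, not from the question): cmd needs /k (or /c) to execute a command passed on its command line, and start treats its first quoted argument as the window title:

        @ECHO OFF
        REM open a new window, keep it alive with /k, and run tail inside it
        start "tail sites.log" cmd /k tail -f "C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\sites.log"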


  • How can I retrieve the details of the file from an outbound operation in BPEL 11g

    - by [email protected]
    Several times we come across requirements where we need to capture the details of the file that got written out as part of a BPEL process invoking a File/FTP Adapter. Consider a case where we're using the FileNamingConvention "PurchaseOrder_%SEQ%.txt" and we need to do some post-processing based on the filename (remember that we wouldn't know the filename until the adapter invocation completes).

    In order to achieve this, we need to manually tweak the WSDL so that the File/FTP Adapter can return the metadata of the file that was written out. In general, the File/FTP Write/Put WSDL operations are one-way, but the adapters are designed to return the metadata back if the WSDL is tweaked into a two-way WSDL. In addition, the <wsdl:output/> must import the fileread.xsd schema; you will need to copy fileread.xsd from here into the xsd folder of your composite. Once the WSDL has been tweaked this way, the BPEL <invoke> returns the file metadata as part of the BPEL output variable.
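
    As a rough illustration of the one-way to two-way change described above (a hypothetical sketch; the operation and message names are invented, the real ones come from the adapter-generated WSDL):

        <!-- before: one-way write operation -->
        <wsdl:operation name="Write">
          <wsdl:input message="tns:Write_msg"/>
        </wsdl:operation>

        <!-- after: add an output message whose element comes from fileread.xsd -->
        <wsdl:operation name="Write">
          <wsdl:input message="tns:Write_msg"/>
          <wsdl:output message="tns:WriteResponse_msg"/>
        </wsdl:operation>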


  • Centralizing a resource file among multiple projects in one solution (C#/WPF)

    - by MarkPearl
    One of the challenges one faces when doing multi-language support in WPF is having several projects in one solution (i.e. a business layer and a UI layer) that all need localized strings. Typically each project would have its own resource file, meaning that if you have 3 projects in a solution, you will have 3 resource files.

    For me this isn't an ideal solution, as you normally want to send the resource files to a translator, and the more resource files you have, the more fragmented the dictionary will be and the more complicated the translator's job becomes. This can easily be overcome by creating a single project that just holds your translation resources and then exposing it to the other projects as a reference, as explained in the following steps.

    Step 1: Add a class library to your solution that will contain just the resource files. Your solution will now have an additional project.
    Step 2: Reference this project from the other projects.
    Step 3: Move all the resources from the other resource files into the translation project's resource file.
    Step 4: Set the translation project's resource file access modifier to public.
    Step 5: Point all other projects at the translation resource file instead of their local resource files.

    To do this in XAML, you need to expose the project as a namespace at the top of the XAML file. Note that the example below is for a project called MaxCutLanguages; put the correct project name in its place:

        xmlns:MaxCutLanguages="clr-namespace:MaxCutLanguages;assembly=MaxCutLanguages"

    Then in the actual XAML you replace any hard-coded text with a reference to the resource file:

        <TextBlock Text="{x:Static MaxCutLanguages:Properties.Resources.HelloWorld}"/>

    End result: you can now delete all the resource files in the other projects, as you have one centralized resource file.
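
    The same centralized resources are reachable from code-behind as well; a small sketch (editorial addition, reusing the article's project and key names):

        using System.Windows.Controls;

        namespace MaxCutApp
        {
            public static class Localization
            {
                // read a localized string from the shared resource assembly
                public static void ApplyGreeting(TextBlock target)
                {
                    target.Text = MaxCutLanguages.Properties.Resources.HelloWorld;
                }
            }
        }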


  • TransformXml Task locks config file identified in Source attribute

    - by alexhildyard
    As background: the TransformXml MSBuild task is typically invoked in a custom build step to mark up a web.config file with per-environment configuration; its flexible directives offer highly granular control over the insertion, removal, substitution and transformation of existing configuration hierarchies. For those using the TransformXML task (typically in a Web Deployment Project) I raised an issue against Visual Studio 2010, in which the file handle on the input file was not released, meaning that following transformation the source file remained locked. As a result, the best way to transform a file was first to rename it, transform it, and then copy it back, leaving the "locked" file to be freed up later.I just heard today that this has now been resolved in Visual Studio 2012 RTM. That's good news, because Web Config Transformations offer a lot. An intelligent, automated build process will swap in the relevant transform(s), making it much easier to synthesise the Developer and Build server builds. This makes for a simpler and more exemplary build process, and with the tighter coupling comes a correspondingly quicker response to Developmental change.Oh, and don't forget -- it isn't just web.configs you can transform. You can transform app.configs, or indeed any XML file that honours the task's schema and hierarchical rules.
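
    For readers who haven't wired the task up by hand before, a typical invocation looks roughly like this (a hedged sketch; the assembly path varies by Visual Studio version and the target name is arbitrary):

        <!-- in a project file: import and call the TransformXml task -->
        <UsingTask TaskName="TransformXml"
                   AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v11.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
        <Target Name="TransformConfig">
          <TransformXml Source="web.config"
                        Transform="web.Release.config"
                        Destination="web.transformed.config" />
        </Target>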


  • c# Create a unique name with a GUID

    - by Dave Rook
    I am creating a backup solution; I doubt there is anything new in what I'm trying to achieve. Before copying the file, I want to take a backup of the destination file in case anything becomes corrupt. This means renaming the file, and I have to be careful when renaming in case the target name already exists, so simply appending "01" is not safe. My question, since I haven't found the answer elsewhere: would adding a GUID to the file name work? So, if my file were called file01.txt, I would rename it to file01.txtGUID (where GUID is the generated GUID), then perform my backup of that file (at this instant having 2 backups) and then, after ensuring the file has copied (by comparing its length to the source), delete the file with the GUID in the name. I know a GUID is not 100% guaranteed to be unique, but would this suffice?
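
    A compact sketch of that flow in C# (an editorial illustration; the paths are examples and the length comparison mirrors the question's own integrity check):

        using System;
        using System.IO;

        class BackupSketch
        {
            static void Backup(string source, string destination)
            {
                string temp = null;
                if (File.Exists(destination))
                {
                    // park the old destination under a GUID-suffixed name, e.g. file01.txt{guid}
                    temp = destination + Guid.NewGuid().ToString("N");
                    File.Move(destination, temp);
                }
                File.Copy(source, destination);
                // crude integrity check from the question: compare lengths before dropping the backup
                if (temp != null &&
                    new FileInfo(source).Length == new FileInfo(destination).Length)
                {
                    File.Delete(temp);
                }
            }
        }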

