Search Results

Search found 76641 results on 3066 pages for 'file format'.


  • Windows 2008 File Share

    - by user36540
    Hi, I have 3 Windows 2008 Standard servers in my system with no domain controller. Two of the servers are running an NLB cluster and the third server is a file server that the web servers connect to. I want to store my source code on the file server and point the IIS config to the network file share. The web sites also need access to a file share on the file server. I was able to share the network drive and access it while logged into either of the web servers, but my web apps are unable to access the file share - I assume due to permissions. Does anybody know the correct way to do this? Thanks, Chris

    Read the article

  • Fix Linux-made png file for use on Windows

    - by BGM
    There is a particular icon library that I really like. Now, I have downloaded the package that has the png files inside (I know the ico files are there, but I want the png files). However, my Windows 7 computer tells me that about 1/3 of the png files are corrupt. I usually use XnView to view the files, and it won't display the "corrupt" files. I've tried other editors and viewers and I get the same issue. Now, the png package was originally designed for Linux to be an OS-icon-package for the entire system, so I figure the png files were built in Linux. So, is there a way I can "fix" the "corrupted" png files for my Windows 7 computer? Maybe when the files were created there was some bit that was off-colour or something? Any clues? [edit] I have read in this thread that the "corruption" could happen during the extraction process. I did all the extraction with 7-zip. It was a zip containing a tar. I will try another extractor, but I don't think it will make any difference.
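
    One way to narrow down whether the files themselves are damaged (rather than the viewers being picky) is to verify the PNG structure directly. The following is a rough Python sketch, not tied to any particular icon package, that walks each chunk of a PNG and checks its CRC:

        import struct
        import sys
        import zlib

        PNG_SIG = b'\x89PNG\r\n\x1a\n'

        def check_png(path):
            # Walk the chunk stream: 4-byte length, 4-byte type, data, 4-byte CRC.
            with open(path, 'rb') as fh:
                if fh.read(8) != PNG_SIG:
                    return 'bad signature'
                while True:
                    head = fh.read(8)
                    if len(head) < 8:
                        return 'truncated'
                    length, ctype = struct.unpack('>I4s', head)
                    data = fh.read(length)
                    crc = fh.read(4)
                    if len(data) < length or len(crc) < 4:
                        return 'truncated'
                    if zlib.crc32(ctype + data) & 0xffffffff != struct.unpack('>I', crc)[0]:
                        return 'bad CRC in %s chunk' % ctype.decode('ascii', 'replace')
                    if ctype == b'IEND':
                        return 'ok'

        for name in sys.argv[1:]:
            print(name, '->', check_png(name))

    If the checker reports every file as "ok", the data survived extraction and the problem is on the viewer side; consistent "truncated" or "bad CRC" results would point back at the archive or the transfer.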

    Read the article

  • Delete file then run file at startup

    - by Henry Gibson
    I'm running the music player Foobar2000 through Wine at startup. For some reason, when I shut down Ubuntu the Foobar2000 process is ended abnormally in Wine, and when it runs the next time I get an annoying "start in safe mode?" message. Not a huge problem, but I'd like it fixed. The safe mode message only appears if a file called "running" is present when Foobar2000 starts (the file is left behind when the player isn't closed properly). So by deleting "running" and then starting Foobar2000, the message doesn't appear. I thought it would be easy enough to enter this as a startup command; however, it doesn't want to work. The command I am using is rm '/home/henry/.wine/drive_c/users/henry/Application Data/foobar2000/running';'/home/henry/.wine/drive_c/Program Files (x86)/foobar2000/foobar2000.exe' which works fine if I just run it from a terminal: the file is deleted and then Foobar2000 runs. Does anyone know why this isn't working at startup? Also, will this run with a terminal visible? How can I make just the GUI appear? Thanks
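
    A possible workaround is to wrap the two steps in a small script and point the startup entry at that instead. The sketch below assumes wine is on the PATH and reuses the paths from the command above; adjust them to your own prefix:

        #!/usr/bin/env python3
        # Startup helper sketch: remove foobar2000's stale "running" marker,
        # then launch the player through Wine.
        import os
        import subprocess

        marker = ('/home/henry/.wine/drive_c/users/henry/'
                  'Application Data/foobar2000/running')
        exe = ('/home/henry/.wine/drive_c/Program Files (x86)/'
               'foobar2000/foobar2000.exe')

        if os.path.exists(marker):
            os.remove(marker)

        # wine starts the Windows binary; launched from a .desktop startup
        # entry, no terminal window should be needed.
        subprocess.Popen(['wine', exe])

    Saving this as, say, start-foobar.py and using python3 /path/to/start-foobar.py as the startup command keeps the logic in one place and avoids shell quoting issues.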

    Read the article

  • How to associate all file types within Wine with its corresponding native application?

    - by MestreLion
    This is easily done for a single file type, as answered in How to associate a file type within Wine with a native application?, by creating a .reg for the desired filetype. But that answer covers AVI only. I use some Wine apps (uTorrent, Soulseek, Eudora, to name a few) that can launch a wide range of files. Email attachments, for example, can be JPG, DOC, PDF, PPS... it's impossible (and not desirable) to track down all possible file types that one may receive in an email or download in a torrent. So I need the solution to be more generic and broad. I need the file association to honor whatever native app is currently configured. And I want this to be done for all file types configured in my system. I've already figured out how to make the solution generic: simply replace the launched app in the .reg with winebrowser, like this:

    [HKEY_CLASSES_ROOT\.pdf]
    @="PDFfile"
    "Content Type"="application/pdf"

    [HKEY_CLASSES_ROOT\PDFfile\Shell\Open\command]
    @="C:\\windows\\system32\\winebrowser.exe \"%1\""

    I've tested this and it works correctly. Since winebrowser uses xdg-open as a backend, and converts my Windows path to a Unix one, the correct (Linux) app is launched. So I need a "batch" updater for Wine's registry, a sort of wine-update-associations script that I can run whenever a new app is installed. Maybe a tool that can:

    - List all MIME types in my system that have a default, installed app associated
    - Extract all the needed info (glob, MIME type, etc.)
    - Generate the .reg file in the above format

    The tricky part is: I've searched a LOT to find info about how association is done in Ubuntu 10.10 onwards, and documentation is scarce and confusing, to say the least. Freedesktop.org has no complete spec, and even the Gnome docs are obsolete. So far I've gathered 4 files that contain association info, but I'm clueless on which (or why) to use, or how to use them to generate the .reg file:

    ~/.local/share/applications/mimeapps.list
    ~/.local/share/applications/mimeinfo.cache
    /usr/share/applications/mimeinfo.cache
    /etc/gnome/defaults.list

    Any help, script or explanation would be greatly appreciated! Thanks!
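
    As a starting point, here is a rough, untested Python sketch of such a wine-update-associations script. It sidesteps the mimeapps.list question entirely by reading the extension globs that shared-mime-info keeps in /usr/share/mime/globs (that path and the "mime/type:*.ext" line format are assumptions to verify on your system) and pointing every extension at winebrowser, which already defers to xdg-open:

        #!/usr/bin/env python3
        # Sketch: generate wine-associations.reg with one winebrowser entry per
        # known file extension, using the shared-mime-info glob list.
        GLOBS = '/usr/share/mime/globs'

        entries = []
        with open(GLOBS) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                mime, glob = line.split(':', 1)
                if not glob.startswith('*.'):
                    continue                    # skip patterns that aren't simple extensions
                ext = glob[2:].lower()
                prog = ext.upper() + 'file'     # arbitrary ProgID, e.g. PDFfile
                entries.append(
                    '[HKEY_CLASSES_ROOT\\.%s]\n'
                    '@="%s"\n'
                    '"Content Type"="%s"\n\n'
                    '[HKEY_CLASSES_ROOT\\%s\\Shell\\Open\\command]\n'
                    '@="C:\\\\windows\\\\system32\\\\winebrowser.exe \\"%%1\\""\n'
                    % (ext, prog, mime, prog))

        with open('wine-associations.reg', 'w') as out:
            out.write('REGEDIT4\n\n' + '\n'.join(entries))
        print('wrote %d associations' % len(entries))

    The output file would then be imported with wine regedit wine-associations.reg. Extensions that Wine apps themselves must keep (exe, lnk, and so on) probably deserve a small exclusion list before importing.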

    Read the article

  • RuntimeError: maximum recursion depth exceeded while calling a Python object

    - by Bilal Basharat
    This error arises when I try to run the following test case, which is written in models.py of my Django app named 'administration':

    from django.test import Client, TestCase
    from django.core import mail

    class ClientTest( TestCase ):
        fixtures = [ 'testdata.json' ]

        def test_get_register( self ):
            response = self.client.get( '/accounts/register/', {} )
            self.assertEqual( response.status_code, 200 )

    The error arises at this line specifically: response = self.client.get( '/accounts/register/', {} ). My Django version is 1.2.1, Python is 2.6, and the Satchmo version is 0.9.2-pre hg-unknown. I am coding on Windows (XP SP2). The command to run the test case is: python manage.py test administration. The complete error log is as follows; the same pair of threaded_multihost frames repeats until the recursion limit is hit, so only one occurrence of the pair is shown:

    File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host
        site = by_host(host=host[4:], id_only=id_only)
    File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host
        site = by_host(host = 'www.%s' % host, id_only=id_only)
    [... the two frames above repeat many times ...]
    File "build\bdist.win32\egg\threaded_multihost\sites.py", line 101, in by_host
        site = Site.objects.get(domain=host)
    File "C:\django\django\db\models\manager.py", line 132, in get
        return self.get_query_set().get(*args, **kwargs)
    File "C:\django\django\db\models\query.py", line 336, in get
        num = len(clone)
    File "C:\django\django\db\models\query.py", line 81, in __len__
        self._result_cache = list(self.iterator())
    File "C:\django\django\db\models\query.py", line 269, in iterator
        for row in compiler.results_iter():
    File "C:\django\django\db\models\sql\compiler.py", line 672, in results_iter
        for rows in self.execute_sql(MULTI):
    File "C:\django\django\db\models\sql\compiler.py", line 717, in execute_sql
        sql, params = self.as_sql()
    File "C:\django\django\db\models\sql\compiler.py", line 65, in as_sql
        where, w_params = self.query.where.as_sql(qn=qn, connection=self.connection)
    File "C:\django\django\db\models\sql\where.py", line 91, in as_sql
        sql, params = child.as_sql(qn=qn, connection=connection)
    File "C:\django\django\db\models\sql\where.py", line 94, in as_sql
        sql, params = self.make_atom(child, qn, connection)
    File "C:\django\django\db\models\sql\where.py", line 141, in make_atom
        lvalue, params = lvalue.process(lookup_type, params_or_value, connection)
    File "C:\django\django\db\models\sql\where.py", line 312, in process
        connection=connection, prepared=True)
    File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner
        return func(*args, **kwargs)
    File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner
        return func(*args, **kwargs)
    File "C:\django\django\db\models\fields\__init__.py", line 323, in get_db_prep_lookup
        return [self.get_db_prep_value(value, connection=connection, prepared=prepared)]
    File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner
        return func(*args, **kwargs)
    File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner
        return func(*args, **kwargs)
    RuntimeError: maximum recursion depth exceeded while calling a Python object
    ----------------------------------------------------------------------
    Ran 7 tests in 48.453s

    FAILED (errors=1)
    Destroying test database 'default'...

    Read the article

  • Program to remove exact duplicate files while caching search results

    - by John Thomas
    We need a Windows 7 program to remove/check duplicates, but our situation is somewhat different from the standard one for which there are enough programs. We have a fairly large static archive (collection) of photos spread over several disks. Let's call them Disk A..M. We also have some disks (let's call them Disk 1..9) which contain some duplicates of files which are to be found on disks A..M. We want to add to our collection new disks (N, O, P... and so on) which will contain the photos from disks 1..9 but, of course, we don't want to have any photos two (or more) times. Of course, theoretically, the task can be solved with a regular file duplicate remover, but the time needed would be very long. Ideally, as far as I can see now, the real solution would be a program which will scan the disks A..M, store the file sizes/hashes of the photos in an indexed database/file(s) and will check the new disks (1..9) against this database. However, I am having a hard time finding such a program (if one exists). Other things to note:

    - we consider that the disks A..M (the collection) don't have any duplicates on them
    - the file names might be changed
    - we aren't interested in the approximate (fuzzy) comparison which can be found in some photo comparing programs; we hunt for exact duplicate files
    - we aren't afraid of the command line :-)
    - we need it to work on Win7/XP
    - we prefer (of course) freeware

    TIA for any suggestions, John Th.
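
    As an illustration of the "index once, check many" approach described above, a rough Python sketch (drive letters and file names below are placeholders) could look like this:

        # Sketch: hash every file on the archive disks into a small SQLite
        # database, then report which files on a new disk already exist there.
        import hashlib
        import os
        import sqlite3

        def file_hash(path, chunk=1 << 20):
            h = hashlib.sha1()
            with open(path, 'rb') as fh:
                for block in iter(lambda: fh.read(chunk), b''):
                    h.update(block)
            return h.hexdigest()

        def build_index(db_path, roots):
            db = sqlite3.connect(db_path)
            db.execute("CREATE TABLE IF NOT EXISTS files (hash TEXT, size INTEGER, path TEXT)")
            for root in roots:
                for dirpath, _, names in os.walk(root):
                    for name in names:
                        full = os.path.join(dirpath, name)
                        db.execute("INSERT INTO files VALUES (?, ?, ?)",
                                   (file_hash(full), os.path.getsize(full), full))
            db.commit()
            return db

        def report_duplicates(db, new_root):
            for dirpath, _, names in os.walk(new_root):
                for name in names:
                    full = os.path.join(dirpath, name)
                    row = db.execute("SELECT path FROM files WHERE hash = ?",
                                     (file_hash(full),)).fetchone()
                    if row:
                        print("duplicate: %s already stored as %s" % (full, row[0]))

        # Example usage with placeholder drive letters:
        # db = build_index("photo_index.db", ["A:\\", "B:\\"])
        # report_duplicates(db, "N:\\")

    A real run over terabytes of photos would still be I/O bound, but the index only has to be built once for disks A..M and can then be reused for every new disk.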

    Read the article

  • Create and Share a File (Also a mysterious error)

    - by Kirk
    My goal is to create an XML file and then send it through the share Intent. I'm able to create an XML file using this code:

    FileOutputStream outputStream = context.openFileOutput(fileName, Context.MODE_WORLD_READABLE);
    PrintStream printStream = new PrintStream(outputStream);
    String xml = this.writeXml(); // get XML here
    printStream.println(xml);
    printStream.close();

    I'm stuck trying to retrieve a Uri to the output file in order to share it. I first tried to access the file by converting the file to a Uri:

    File outFile = context.getFileStreamPath(fileName);
    return Uri.fromFile(outFile);

    This returns file:///data/data/com.my.package/files/myfile.xml but I cannot appear to attach this to an email, upload, etc. If I manually check the file length, it's proper and shows there is a reasonable file size. Next I created a content provider and tried to reference the file, and it isn't a valid handle to the file. The ContentProvider doesn't ever seem to be called at any point.

    Uri uri = Uri.parse("content://" + CachedFileProvider.AUTHORITY + "/" + fileName);
    return uri;

    This returns content://com.my.package.provider/myfile.xml but I check the file and it's zero length. How do I access files properly? Do I need to create the file with the content provider? If so, how?

    Update: Here is the code I'm using to share. If I select Gmail, it does show as an attachment, but when I send it gives an error Couldn't show attachment and the email that arrives has no attachment.

    public void onClick(View view) {
        Log.d(TAG, "onClick " + view.getId());
        switch (view.getId()) {
            case R.id.share_cancel:
                setResult(RESULT_CANCELED, getIntent());
                finish();
                break;
            case R.id.share_share:
                MyXml xml = new MyXml();
                Uri uri;
                try {
                    uri = xml.writeXmlToFile(getApplicationContext(), "myfile.xml");
                    // uri is "file:///data/data/com.my.package/files/myfile.xml"
                    Log.d(TAG, "Share URI: " + uri.toString() + " path: " + uri.getPath());
                    File f = new File(uri.getPath());
                    Log.d(TAG, "File length: " + f.length()); // shows a valid file size
                    Intent shareIntent = new Intent();
                    shareIntent.setAction(Intent.ACTION_SEND);
                    shareIntent.putExtra(Intent.EXTRA_STREAM, uri);
                    shareIntent.setType("text/plain");
                    startActivity(Intent.createChooser(shareIntent, "Share"));
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                }
                break;
        }
    }

    I noticed that there is an Exception thrown here from inside createChooser(...), but I can't figure out why it's thrown:

    E/ActivityThread(572): Activity com.android.internal.app.ChooserActivity has leaked IntentReceiver com.android.internal.app.ResolverActivity$1@4148d658 that was originally registered here. Are you missing a call to unregisterReceiver()?

    I've researched this error and can't find anything obvious. Both of these links suggest that I need to unregister a receiver: "Chooser Activity Leak - Android" and "Why does Intent.createChooser() need a BroadcastReceiver and how to implement?". I have a receiver set up, but it's for an AlarmManager that is set elsewhere and doesn't require the app to register / unregister.

    Code for openFile(...): in case it's needed, here is the content provider I've created.

    public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
        String fileLocation = getContext().getCacheDir() + "/" + uri.getLastPathSegment();
        ParcelFileDescriptor pfd = ParcelFileDescriptor.open(new File(fileLocation),
                ParcelFileDescriptor.MODE_READ_ONLY);
        return pfd;
    }

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.  There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

    EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
    102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
    101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
    103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
    304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
    333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
    100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
    334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
    400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
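
    As an aside for readers who prefer to see the transformations spelled out in code, here is a rough Python sketch (not expressor Studio code; the century pivot of 01 is taken from the description above) that applies the same date and salary conversions to the first sample record:

        # Illustration of the transformations the schema describes: two-digit
        # years pivoted on 01, termination timestamps with hours and minutes,
        # and "$85,000"-style salaries turned into Decimals.
        import csv
        from datetime import datetime
        from decimal import Decimal
        from io import StringIO

        SAMPLE = ('EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY\n'
                  '102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"\n')

        def pivot_year(two_digit, pivot=1):
            yy = int(two_digit)
            return 2000 + yy if yy < pivot else 1900 + yy

        def parse_start(value):                      # e.g. 13-JAN-93
            day, mon, yy = value.split('-')
            return datetime.strptime('%s-%s-%d' % (day, mon, pivot_year(yy)),
                                     '%d-%b-%Y')

        def parse_end(value):                        # e.g. 24-JUL-98 17:00
            date_part, time_part = value.split(' ')
            day, mon, yy = date_part.split('-')
            return datetime.strptime('%s-%s-%d %s' % (day, mon, pivot_year(yy), time_part),
                                     '%d-%b-%Y %H:%M')

        def parse_salary(value):                     # e.g. "$85,000"
            return Decimal(value.replace('$', '').replace(',', ''))

        for row in csv.DictReader(StringIO(SAMPLE)):
            print(int(row['EMP_ID']), parse_start(row['STRT_DATE']),
                  parse_end(row['END_DATE']), parse_salary(row['SALARY']))

    The point of the article, of course, is that with expressor Studio these rules live in the schema artifact rather than in hand-written code like this.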

    Read the article

  • Visual Studio 2008 “Format Document/Selection” command and a function named “assert” in JavaScript c

    - by AGS777
    I just found some funny behavior in the Visual Studio 2008 editor.  Sorry if it is an already well-known bug. If you happen to have a JavaScript function named “assert” in your code (and there is a pretty high likelihood of that, in my opinion), for example something like: function assert(x, message) { if (x) console.log(message); } then when either the Format Document (Ctrl + K, Ctrl + D) or Format Selection (Ctrl + K, Ctrl + F) command is applied to the document/block containing the function, the result of the formatting will be: functionassert(x, message) { if (x) console.log(message); } That’s it. function and assert are now joined into one solid word. So be aware of this in case you suddenly start receiving a strange exception in your JavaScript code: missing ; before statement functionassert(x, message) And no, it is not an April Fool's joke. Just try it for yourself.

    Read the article

  • Problem with reformatting Sandisk read-only USB drive

    - by dimas
    Hi everyone, I have a problem reformatting my USB drive. It shows this error message whenever I try to reformat it using gparted:

    GParted 0.11.0 --enable-libparted-dmraid
    Libparted 2.3
    Format /dev/sdb1 as ntfs  00:00:01  ( ERROR )
    calibrate /dev/sdb1  00:00:01  ( SUCCESS )
    path: /dev/sdb1
    start: 22,768
    end: 31,248,383
    size: 31,225,616 (14.89 GiB)
    set partition type on /dev/sdb1  00:00:00  ( ERROR )
    libparted messages  ( INFO )
    Unable to open /dev/sdb read-write (Read-only file system). /dev/sdb has been opened read-only.
    [the message above is repeated several times]
    Can't write to /dev/sdb, because it is opened read-only.
    Unable to open /dev/sdb read-write (Read-only file system). /dev/sdb has been opened read-only.

    The reason I want to reformat it is that it suddenly stopped working while I was transferring files from my PC, and it also erased all of the content that was saved on the drive. I have tried various tutorials on the net regarding this, but I can't find a way to change a USB drive's read-only property to read-write, or anything else that would enable me to reformat it. I have also checked this link (format read-only USB drive) but it doesn't have a solution either. I am attempting to do this on Ubuntu 12.04.

    Read the article

  • How to open OLE2 file format document

    - by Dominic Jordan Hasford
    I received a file in .OLE2 format by email and I can't get it open. When I try to open it with Document Viewer (the default PDF program in Ubuntu) it says the format is not supported. I searched in the Software Center for OLE2, and it returned a program called RipOle. It says it opens that format, but you have to run it in the terminal and I don't know how. Does anyone know how to open OLE2 documents? Or do you know how to work ripole? Thanks in advance!
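
    If ripole turns out to be awkward, another route (a suggestion, not something from the question) is the third-party Python package olefile (pip install olefile), which can list and extract the streams inside an OLE2 container. A small sketch, using "document.ole2" as a placeholder name for the received file:

        # Sketch: inspect an OLE2 file and dump its streams to separate files
        # so they can be examined with other tools.
        import olefile

        path = 'document.ole2'
        if not olefile.isOleFile(path):
            raise SystemExit('%s is not an OLE2 container' % path)

        ole = olefile.OleFileIO(path)
        for stream in ole.listdir():
            name = '/'.join(stream)
            data = ole.openstream(stream).read()
            print('%-40s %d bytes' % (name, len(data)))
            with open(name.replace('/', '_') + '.bin', 'wb') as out:
                out.write(data)
        ole.close()

    Seeing the stream names often reveals what kind of document it really is (an old Office file, an embedded attachment, and so on), which in turn suggests the right application to open it with.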

    Read the article

  • How to Send Custom data in ADF/XML format

    - by Kaidul Islam Sazal
    I have built a lead generator form (here) which will send email in ADF/XML format. But all I found on the internet about ADF is that there is a limited set of tags like contact, customer, vendor, vehicle and their corresponding sub-tags. I have to send the following information via ADF/XML: Your Company Name App Name Description for iTunes and Google Keywords Image or logo for icon First splash page image Second splash page image About Company work schedule List up to 10 services your company provide Company Email Company Phone Company Website Company Facebook Company Youtube Company Twitter Company Google Comments My question is: how can I send all this data in ADF/XML? What would the tags for these be? What would the format be? I didn't find any specific answers about it on the internet.

    Read the article

  • How do I compare the md5sum of a file with the md5 file (that was available to download with the file)?

    - by user91583
    Images are available for a distro on http://livedistro.org/gnulinux/israel-remix-team-mint-12. I want to use the 32-bit version. I have downloaded the ISO file for the 32-bit version (customdist.iso). I have downloaded the md5 file for the ISO file (customdist.iso.md5). I want to calculate the md5sum of the ISO file and compare it to the md5 file. I can use the md5sum command to display within the terminal the calculated md5 for the ISO file. I have searched the web and can't find a way to compare the calculated md5 for the ISO file with the downloaded md5 file. So far, the closest I have come is the command md5sum -c customdist.iso.md5 from within the folder containing both the files, but this command gives the result: md5sum: customdist.iso.md5: no properly formatted MD5 checksum lines found Any ideas?
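
    The "no properly formatted MD5 checksum lines found" message usually means the downloaded .md5 file is not in the two-column "digest  filename" layout that md5sum -c expects. A workaround is to compare the digests yourself; the Python sketch below (assuming the .md5 file contains the 32-character hex digest somewhere in its text) does exactly that:

        # Sketch: compute the ISO's MD5 and compare it with whatever digest
        # appears in the downloaded .md5 file.
        import hashlib
        import re

        def md5_of(path, chunk=1 << 20):
            h = hashlib.md5()
            with open(path, 'rb') as fh:
                for block in iter(lambda: fh.read(chunk), b''):
                    h.update(block)
            return h.hexdigest()

        computed = md5_of('customdist.iso')
        with open('customdist.iso.md5') as fh:
            match = re.search(r'\b[0-9a-fA-F]{32}\b', fh.read())

        if match and match.group(0).lower() == computed:
            print('MD5 matches:', computed)
        else:
            print('MD5 mismatch or no digest found; computed', computed)

    Alternatively, editing the .md5 file so that it contains a single line of the form "<digest>  customdist.iso" should make the original md5sum -c customdist.iso.md5 command work as intended.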

    Read the article

  • Solaris Tips : Assembler, Format, File Descriptors, Ciphers & Mount Points

    - by Giri Mandalika
    1. Most Oracle software installers need assembler

    Assembler (as) is not installed by default on Solaris 11. Find and install it, e.g.:

    # pkg search assembler
    INDEX ACTION VALUE PACKAGE
    pkg.fmri set solaris/developer/assembler pkg:/developer/[email protected]

    # pkg install pkg:/developer/assembler

    The assembler binary used to be under the /usr/ccs/bin directory on Solaris 10 and prior versions. There is no /usr/ccs/bin on Solaris 11; its contents were moved to /usr/bin.

    2. Non-interactive retrieval of the entire list of disks that format reports

    If the format utility cannot show the entire list of disks in a single screen on stdout, it shows some and prompts the user to - hit space for more or s to select - to move to the next screen to show a few more disks. Run the following command(s) to retrieve the entire list of disks in a single shot.

    format

    3. Finding system-wide file descriptors/handles in use

    Run the following kstat command as any user (privileged or non-privileged).

    kstat -n file_cache -s buf_inuse

    Going through /proc (the process filesystem) is less efficient and may lead to inaccurate results due to the inclusion of duplicate file handles.

    4. ssh connection to a Solaris 11 host fails with the error "Couldn't agree a client-to-server cipher (available: aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour)"

    Solution: add 3des-cbc to the list of accepted ciphers in the sshd configuration file. Steps: append the following line to /etc/ssh/sshd_config

    Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,\
    arcfour,3des-cbc

    then restart the ssh daemon:

    svcadm -v restart ssh

    5. UFS: Finding the last mount point for a device

    The fsck utility reports the last mountpoint on which the filesystem was mounted (it won't show the mount options though). The filesystem should be unmounted when running fsck. e.g.,

    # fsck -n /dev/dsk/c0t5000CCA0162F7BC0d0s6
    ** /dev/rdsk/c0t5000CCA0162F7BC0d0s6 (NO WRITE)
    ** Last Mounted on /export/oracle
    ** Phase 1 - Check Blocks and Sizes
    ...

    Read the article

  • Format numbers with css

    - by Luc M
    Is it possible to format numbers using CSS? When I have 7000000.00, I would like it displayed as 7 000 000.00. I know I could write a backend (PHP, Perl...) function or a JavaScript function that could return the formatted number, but... The numbers that I want to format are in a cell. I would like to have something like <td class="myformat">7000000.00</td> or <td><span class="myformat">7000000.00</span></td>

    Read the article

  • Utility or technique for swapping files quickly in Windows

    - by foraidt
    I frequently need to swap one file with another without overwriting the original. Let's say there are two files, foo_new.dll and foo.dll. I usually rename them the following way: foo.dll -> foo_old.dll, foo_new.dll -> foo.dll, [do something with the replaced file], foo.dll -> foo_new.dll, foo_old.dll -> foo.dll. This is OK for a single file to swap, but it becomes tedious when swapping multiple files at once. Is there a Windows (7 and preferably XP) utility or a technique that simplifies this task and works well when swapping multiple files? I'd prefer to be able to use it from within FreeCommander, but Windows Explorer would be OK, too.
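
    If a dedicated utility doesn't turn up, the rename dance above is also easy to script. The following Python sketch assumes the naming convention from the question (name_new.ext sitting next to name.ext) and swaps every file passed on the command line with its _new counterpart; running it a second time swaps them back:

        # Sketch: exchange each name.ext with name_new.ext via a temporary
        # name, so nothing is ever overwritten.
        import os
        import sys

        def swap(path):
            base, ext = os.path.splitext(path)
            new = base + '_new' + ext
            tmp = base + '_old' + ext
            os.rename(path, tmp)     # foo.dll     -> foo_old.dll
            os.rename(new, path)     # foo_new.dll -> foo.dll
            os.rename(tmp, new)      # foo_old.dll -> foo_new.dll

        for target in sys.argv[1:]:
            swap(target)

    For example, python swap.py foo.dll bar.dll performs both swaps in one go; a launcher entry in FreeCommander could presumably point at such a script, though that wiring is left out here.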

    Read the article

  • minidlna and Samsung TV: file format not supported

    - by Vasya
    I have a PC with Xubuntu 12.04 and a Samsung 27A950, and I want to use the PC as a DLNA server. But I can't play a file of any format on my TV; I get the error "File format doesn't support". I tried to open *.avi, *.mkv and *.jpg files with the same result. In the error log I can see:

    [2013/06/27 12:34:03] upnphttp.c:1907: error: Error opening /home/family/media/file1.mkv
    [2013/06/27 12:34:06] upnphttp.c:1907: error: Error opening /home/family/media/file2.avi

    So I have no idea why it doesn't work. Any suggestions? miniDLNA version: 1.24.1-stedy. The config file is the default one, with only a few directories added. The version from the default Ubuntu repository behaved the same. Some time ago I tried MediaTomb and got the same error.

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details. The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

    % cc -c ~/hello.c
    % ar r foo.a hello.o
    % file foo.a
    foo.a: current ar archive, 32-bit symbol table
    % ar r -S foo.a hello.o
    % file foo.a
    foo.a: current ar archive, 64-bit symbol table

    In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile

    Why file Is No Good For Archives And Yet Should Not Be Fixed

    The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things:

    1. Is this an archive?
    2. Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms?

    The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). To answer that question requires a multi-step process:

    1. Extract all archive members.
    2. Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description.
    3. Remove the extracted files.

    It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so:

    - The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit.
    - It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it.
    - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line. In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command.

    The elffile Command

    The syntax for this new command is

    elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

    File          Description
    config        A runtime linker configuration file produced with crle
    dwarf.o       An ELF object
    /etc/passwd   A text file
    mixed.a       Archive containing a mixture of ELF and non-ELF members
    mixed_elf.a   Archive containing ELF objects for different machines
    not_elf.a     Archive containing no ELF objects
    same_elf.a    Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

    % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
    config:      Runtime Linking Configuration 64-bit MSB SPARCV9
    dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
    /etc/passwd: ascii text
    mixed.a:     current ar archive, 32-bit symbol table
    mixed_elf.a: current ar archive, 32-bit symbol table
    not_elf.a:   current ar archive
    same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways:

    1. Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
    2. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

    % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
    config:      Runtime Linking Configuration 64-bit MSB SPARCV9
    dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
    /etc/passwd: non-ELF
    mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
    mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
    not_elf.a:   current ar archive, non-ELF content
    same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: the vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.

    elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from 'file', for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

    % elffile -s detail mixed.a
    mixed.a: current ar archive, 32-bit symbol table
    mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
    mixed.a(main.c): non-ELF content
    mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]
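
    As an aside, the "examine members in place" idea is easy to prototype outside of libelf. The Python sketch below (not related to the actual elffile implementation) walks a System V ar archive and reports which members carry the ELF magic:

        # Illustrative only: scan an ar archive in place and classify members
        # as ELF or non-ELF, without extracting anything.
        import sys

        def scan_archive(path):
            with open(path, 'rb') as fh:
                if fh.read(8) != b'!<arch>\n':
                    print('%s: not an ar archive' % path)
                    return
                while True:
                    header = fh.read(60)        # fixed-size member header
                    if len(header) < 60:
                        break
                    name = header[0:16].decode('ascii', 'replace').strip()
                    size = int(header[48:58].decode('ascii').strip())
                    data = fh.read(size)
                    if size % 2:
                        fh.read(1)              # members are 2-byte aligned
                    if name in ('/', '//', '/SYM64/'):
                        continue                # symbol/string tables, not real members
                    member = name.rstrip('/')
                    if data[:4] == b'\x7fELF':
                        bits = {1: '32-bit', 2: '64-bit'}.get(data[4], 'unknown class')
                        print('%s(%s): ELF %s' % (path, member, bits))
                    else:
                        print('%s(%s): non-ELF content' % (path, member))

        for archive in sys.argv[1:]:
            scan_archive(archive)

    Real archives also use extended name tables for long member names, which this sketch ignores; elffile itself, of course, goes through libelf and handles all of that.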

    Read the article

  • How to add support for the JPEG image format

    - by Samir Sabri
    After installing ImageMagick, I tested it with a jpg image, like this: identify 1.jpg But I got this result:

    identify: no decode delegate for this image format `1.jpg' @ error/constitute.c/ReadImage/550.

    Then I tried to add support for the JPEG format with: yum install libjpeg libjpeg-devel but I got:

    Setting up Install Process
    No package libjpeg available.
    No package libjpeg-devel available.
    Nothing to do

    I thought I needed to use apt-get instead, so I did: apt-get install libjpeg libjpeg-devel but I got:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package libjpeg
    E: Unable to locate package libjpeg-devel

    Is there an easy way to get those libraries installed? I am using Ubuntu 12.04.

    Read the article

  • Apps capable of mounting/unmounting CD/DVD Images with multi-sector or protected format

    - by Luis Alvarado
    What apps for Ubuntu exist that can mount/unmount CD/DVD images? Image formats like cue, bin, iso, Nero format, etc. Needed features:

    - Emulate protected CD/DVDs like Daemon Tools
    - Mount multi-sector images
    - Convert other formats (including multi-sector images) to ISO format

    UPDATE - Updated the question to "need the 3 features above" instead of having them as optional. These 3 features are needed to be able to use images that have any of these features enabled. For normal mounting/burning image apps you can see here: How can I graphically mount ISOs? Can I mount an ISO without administrative privileges? How do I burn or mount an ISO file?

    Read the article

  • VB.NET 2008, Windows 7 and saving files

    - by James Brauman
    Hello, we have to learn VB.NET for the semester; my experience lies mainly with C# - not that this should make a difference to this particular problem. I've used just about the most simple way to save a file using the .NET framework, but Windows 7 won't let me save the file anywhere (or anywhere that I have found yet). Here is the code I am using to save a text file.

    Dim dialog As FolderBrowserDialog = New FolderBrowserDialog()
    Dim saveLocation As String = dialog.SelectedPath

    ' ... Build up output string ...

    Try
        ' Try to write the file.
        My.Computer.FileSystem.WriteAllText(saveLocation, output, False)
    Catch PermissionEx As UnauthorizedAccessException
        ' We do not have permissions to save in this folder.
        MessageBox.Show("Do not have permissions to save file to the folder specified. Please try saving somewhere different.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
    Catch Ex As Exception
        ' Catch any exceptions that occurred when trying to write the file.
        MessageBox.Show("Writing the file was not successful.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
    End Try

    The problem is that using this code throws an UnauthorizedAccessException no matter where I try to save the file. I've tried running the .exe file as administrator, and the IDE as administrator. Is this just Windows 7 being overprotective? And if so, what can I do to solve this problem? The requirements state that I be able to save a file! Thanks.

    Read the article
