Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • How to distinguish doc, ppt, xls files without looking at the file extension

    - by Shelby. S
    So I was wondering how you would differentiate ppt, xls and doc files from each other in Linux, regardless of extensions. I tried 'file', but from the looks of it all MS Office files are categorized under the same file type. Similarly I'm having trouble with docx, xlsx and pptx files, since they're essentially all zip files containing a bunch of XML. I also tried a Python script importing the magic module, but no go. I'm trying to identify the actual file type for a sandbox analysis, because I need to run the sample in the sandbox VM (the Windows VM runs everything by extension). Let's say my sample file is labeled try.exe, but in reality it's just a doc file. My script will rename it to try.exe.doc, which works fine for doc files. But since Linux identifies all MS Office files simply as DOC files, there's no way to identify ppt or xls files, and as a result the sandbox won't analyze the sample correctly.
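
    One workable approach is to look inside the containers yourself rather than relying on file(1). Below is a minimal Python sketch of that idea; it assumes the third-party olefile module is available for the legacy OLE2 formats, and that the OOXML formats can be told apart by their top-level zip entries (word/, xl/, ppt/). Treat it as a heuristic, not a definitive detector.

      import zipfile
      import olefile  # third-party module, assumed to be installed

      def office_type(path):
          # OOXML formats (docx/xlsx/pptx) are zip archives whose top-level
          # directory names reveal the producing application.
          if zipfile.is_zipfile(path):
              names = zipfile.ZipFile(path).namelist()
              if any(n.startswith('word/') for n in names):
                  return 'docx'
              if any(n.startswith('xl/') for n in names):
                  return 'xlsx'
              if any(n.startswith('ppt/') for n in names):
                  return 'pptx'
          # Legacy OLE2 formats expose characteristic stream names.
          elif olefile.isOleFile(path):
              ole = olefile.OleFileIO(path)
              if ole.exists('WordDocument'):
                  return 'doc'
              if ole.exists('Workbook') or ole.exists('Book'):
                  return 'xls'
              if ole.exists('PowerPoint Document'):
                  return 'ppt'
          return 'unknown'

    A sandbox script could then append the detected extension before handing the sample to the Windows VM.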

    Read the article

  • Hosting files with support for file tagging / keywords

    - by Zev Chonoles
    I have a large (approx. 25GB) collection of files I would like to host online for people to view or download. I have a spare computer I can use as a dedicated server for these files. I'm looking for a method of, or piece of software for, hosting my files where I can assign tags or keywords to the files, and people viewing my files online can search the collection via the tags. By way of approximate solutions I've found so far, I see that there is software such as Collectorz.com or Readerware for creating databases of one's books / music / movies, and these databases can be searched by tags or keywords, and the databases can be made available and searchable online; this would suit my purposes except that my files are not necessarily books, music, or movies, and I want the files themselves accessible online, not a database describing my files. A commercially-available solution like the ones above would be acceptable, but I'd prefer to have the whole setup under my control (i.e. I'd like to either implement it by hand, or use commercial software that doesn't rely on using the company's servers, paying them a continued fee, etc.). The current extent of my internet experience is designing a few Google Sites, so I know there's a fair chance I won't understand the answers I receive, but I'm always happy to have a summer project :)

    Read the article

  • IPS Facets and Info files

    - by mkupfer
    One of the unusual things about IPS is its "facet" feature. For example, if you're a developer using the foo library, you don't install a libfoo-dev package to get the header files. Instead, you install the libfoo package, and your facet.devel setting controls whether you get header files. I was reminded of this recently when I tried to look at some documentation for Emacs Org mode. I was surprised when Emacs's Info browser said it couldn't find the top-level Info directory. I poked around in /usr/share but couldn't find any info files. $ ls -l /usr/share/info ls: cannot access /usr/share/info: No such file or directory Was I missing a package? $ pkg list -a | egrep "info|emacs" editor/gnu-emacs 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-gtk 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-lisp 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-no-x11 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-x11 23.1-0.175.0.0.0.2.537 i-- system/data/terminfo 0.5.11-0.175.0.0.0.2.1 i-- system/data/terminfo/terminfo-core 0.5.11-0.175.0.0.0.2.1 i-- text/texinfo 4.7-0.175.0.0.0.2.537 i-- x11/diagnostic/x11-info-clients 7.6-0.175.0.0.0.0.1215 i-- $ Hmm. I didn't have the gnu-emacs-lisp package. That seemed an unlikely place to stick the Info files, and pkg(1) confirmed that the info files were not there: $ pkg contents -r gnu-emacs-lisp | grep info usr/share/emacs/23.1/lisp/info-look.el.gz usr/share/emacs/23.1/lisp/info-xref.el.gz usr/share/emacs/23.1/lisp/info.el.gz usr/share/emacs/23.1/lisp/informat.el.gz usr/share/emacs/23.1/lisp/org/org-info.el.gz usr/share/emacs/23.1/lisp/org/org-jsinfo.el.gz usr/share/emacs/23.1/lisp/pcvs-info.el.gz usr/share/emacs/23.1/lisp/textmodes/makeinfo.el.gz usr/share/emacs/23.1/lisp/textmodes/texinfo.el.gz $ Well, if I have what look like the right packages but don't have the right files, the next thing to check is the facets. The first check is whether there is a facet associated with the Info files: $ pkg contents -m gnu-emacs | grep usr/share/info dir facet.doc.info=true group=bin mode=0755 owner=root path=usr/share/info file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-1 [...] file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-2 [...] [...] Yes, they're associated with facet.doc.info. Now let's look at the facet settings on my desktop: $ pkg facet FACETS VALUE facet.locale.en* True facet.locale* False facet.doc.man True facet.doc* False $ Oops. I've got man pages and various English documentation files, but not the Info files. Let's fix that: # pkg change-facet facet.doc.info=True Packages to update: 970 Variants/Facets to change: 1 Create boot environment: No Create backup boot environment: Yes Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 970/970 181/181 9.2/9.2 PHASE ACTIONS Install Phase 226/226 PHASE ITEMS Image State Update Phase 2/2 PHASE ITEMS Reading Existing Index 8/8 Indexing Packages 970/970 # Now we have the info files: $ ls -F /usr/share/info a2ps.info dir@ flex.info groff-2 regex.info aalib.info dired-x flex.info-1 groff-3 remember ...

    Read the article

  • Clean conflicting class files from Temporary ASP.NET Files

    - by Deepfreezed
    Class file conflicts in C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\ are preventing me from building the solution. Even though I try emptying out the folder, each time Visual Studio starts the build process it brings the class file back into the temp folder with the same folder name. If I restart the machine or leave it overnight, the project builds without error. Is there any way to tell Visual Studio to delete/ignore/clean any lingering class files that could be in the temp folder? The Clean Solution option in VS doesn't work either. The class files in conflict are from the App_Code folder.

    Read the article

  • Is it possible to exclude some files from check-in (TFS)?

    - by Thomas Wanner
    We use configuration files within various projects under source control (TFS), where each developer has to make some adjustments in his local copy to configure his environment. The build process takes care of replacing the config files with the server configuration as part of the deployment, so it doesn't actually matter what is in the repository. However, we would still like to keep some kind of default, non-breaking version of the config files in the repository, so that e.g. people not involved in the particular project won't run into trouble because of a local misconfiguration. We tried to resolve this by introducing a check-in policy that simply forbids checking in the config files. This works fine, but since we're too lazy to always uncheck those checkboxes in the Pending Changes window, the question is: is it possible to transparently disable the check-in of particular files without keeping them out of source control (e.g. by locking their current version)?

    Read the article

  • How to convert Xml files to Text Files 2

    - by John
    Hi all, I have around 8000 XML files that need to be converted into text files. Each text file must contain the title, description and keywords of the XML file without the tags, removing the other elements and attributes as well. In other words, I need to create 8000 text files, each containing the title, description and keywords of the corresponding XML file. I need code that does this systematically. Any help would be greatly appreciated. Thanks in advance. Hey all, thank you so much for your replies. Here's a sample of what my XML looks like: <?xml version="1.0"?> <metadata> <identifier>43productionsNightatthegraveyard</identifier> <title>Night at the graveyard</title> <collection>opensource_movies</collection> <mediatype>movies</mediatype> <resource>movies</resource> <upload_application appid="ccPublisher" version="2.2.1"/> <uploader>[email protected]</uploader> <description>una noche en el cementerio (terror)</description> <license>http://creativecommons.org/licenses/by-nc/3.0/</license> <title>Night at the graveyard</title> <format>Video</format> <adder>[email protected]</adder> <licenseurl>http://creativecommons.org/licenses/by-nc/3.0/</licenseurl> <year>2007</year> <keywords>Night,at,the,graveyard,43,productions</keywords> <holder>43 productions</holder> <publicdate>2007-04-11 19:52:28</publicdate> </metadata> And this would be the output: una noche en el cementerio (terror) Night at the graveyard Night,at,the,graveyard,43,productions This needs to be saved with the same name but in text format. Thanks so much; any more suggestions would be much appreciated.
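
    Here is a minimal Python sketch of one way to do this with the standard library's ElementTree parser. It assumes all the files look like the sample above (one metadata root with description, title and keywords children), and the input directory name is only a placeholder.

      import glob
      import os
      import xml.etree.ElementTree as ET

      # 'metadata/*.xml' is a hypothetical location for the 8000 input files
      for xml_path in glob.glob('metadata/*.xml'):
          root = ET.parse(xml_path).getroot()
          # findtext returns the text of the first matching element, '' if missing
          fields = [root.findtext(tag, default='') for tag in ('description', 'title', 'keywords')]
          txt_path = os.path.splitext(xml_path)[0] + '.txt'   # same name, .txt extension
          with open(txt_path, 'w') as out:
              out.write('\n'.join(fields) + '\n')

    The duplicate title element in the sample is harmless here, since findtext simply takes the first match.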

    Read the article

  • How to write log output to two different files

    - by Sun
    My application runs on a customized client framework; the client framework uses log4net to write its own log files. Our application has to use the same log4net to write our log files to our own path (say, our customized path). Currently our log files are created, but the log statements are not written to them; they are written to the client framework's log file. I searched a lot of sites; the link http://stackoverflow.com/questions/308436/log4net-programmatcially-specify-multiple-loggers-with-multiple-file-appenders helped me configure log4net programmatically, but my log statements are still not written to my log file. The code used is as below: public class TraceLog { private string message = string.Empty; private static ILog ILogger = null; private static TraceLog instance = new TraceLog(); private TraceLog() { SetLevel("Log4net.MainForm", "ALL"); AddAppender("Log4net.MainForm", CreateFileAppender("FileAppender", "C:\\mylog.log")); } public static TraceLog Instance { get { return instance; } } public void Debug(string logMessage) { message = PrepareLog(logMessage); ILogger.Debug(message); } protected string PrepareLog(string logMessage) { string message = GetFileMethodLineNumberInfo(); message += logMessage; return message; } protected string GetFileMethodLineNumberInfo() { StackTrace stackTrace = new StackTrace(true); // The position 3 is relative to the index of the specified method StackFrame stackFrame = stackTrace.GetFrame(3); return (stackFrame.GetMethod().DeclaringType.Name + "/" + stackFrame.GetMethod().Name + "/" + stackFrame.GetFileLineNumber() + ":"); } private static void SetLevel(string loggerName, string levelName) { ILogger = LogManager.GetLogger(loggerName); log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger; l.Level = l.Hierarchy.LevelMap[levelName]; } private static void AddAppender(string loggerName, IAppender appender) { ILogger = LogManager.GetLogger(loggerName); log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger; l.AddAppender(appender); } private static IAppender CreateFileAppender(string name, string fileName) { FileAppender appender = new FileAppender(); appender.Name = name; appender.File = fileName; appender.AppendToFile = true; //PatternLayout layout = new PatternLayout(); //layout.ConversionPattern = "%d [%t] %-5p %c [%x] - %m%n"; //layout.ActivateOptions(); //appender.Layout = layout; appender.ActivateOptions(); return appender; } } } Can anyone please help me solve this?

    Read the article

  • Extract files from a zip archive and store these files in the blobstore

    - by Eng_Engineer
    I want to upload a zip archive from a file input in a form, then extract the contents of the uploaded archive and store the contained files in the blobstore so they can be downloaded later. The problem is that I can't deal with the zip archive directly (to read it). I tried this: form = cgi.FieldStorage() file_upload = form['file'] zip1=file_upload.filename zipstream=StringIO.StringIO(zip1.read()) But the problem remains that I can't read the zip as before. I also tried to read the zip archive directly like this: z1=zipfile.ZipFile(zip1,"r") But that raised an error as well. Please, can anyone help me? Thanks in advance.
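
    The snippet above reads the filename string rather than the uploaded bytes, which is why ZipFile cannot open it. Below is a minimal sketch of the usual fix, assuming the same cgi.FieldStorage upload as in the question; the blobstore write itself is application-specific and left as a comment.

      import cgi
      import StringIO   # Python 2, to match the code in the question
      import zipfile

      form = cgi.FieldStorage()
      file_upload = form['file']
      data = file_upload.file.read()                     # the uploaded bytes, not the filename
      archive = zipfile.ZipFile(StringIO.StringIO(data), 'r')
      for name in archive.namelist():
          contents = archive.read(name)                  # bytes of one member of the archive
          # write `contents` to the blobstore here, keyed by `name`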

    Read the article

  • How do XAML files associate with cs files?

    - by LLS
    It seems that XAML files should have corresponding .cs files in a C# project. I know Visual Studio does all of this for us; I'm just curious how they are linked together. I mean, is the association specified in the project file, or does it exist just because they have the same names? Also, the App.xaml file specifies the startup file, but how does the compiler know that? Is it possible to designate a file other than App.xaml to do the same things App.xaml does?

    Read the article

  • How to delete files older than 7 days from an FTP server with a Python script?

    - by Tom
    Hello, I would like to write a Python script which allows me to delete files from an FTP server after they have reached a certain age. I prepared the script below, but it throws the error message: WindowsError: [Error 3] The system cannot find the path specified: '/test123/*.*' Does someone have an idea how to resolve this issue? Thank you in advance! import os, time from ftplib import FTP ftp = FTP('127.0.0.1') print "Automated FTP Maintainance" print 'Logging in.' ftp.login('admin', 'admin') # This is the directory that we want to go to directory = 'test123' print 'Changing to:' + directory ftp.cwd(directory) files = ftp.retrlines('LIST') print 'List of Files:' + files # ftp.remove('LIST') #------------------------------------------- now = time.time() for f in os.listdir(directory): if os.stat(f).st_mtime < now - 7 * 86400: if os.directory.isfile(f): os.remove(os.directory.join(directory, f)) #except: #exit ("Cannot delete files") #------------------------------------------- print 'Closing FTP connection' ftp.close()
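
    The error comes from the loop at the bottom: os.listdir and os.stat operate on the local Windows filesystem, not on the FTP server, so '/test123' does not exist locally. Here is a minimal sketch of a server-side approach, assuming the server supports the MDTM command and that the directory contains only files.

      from datetime import datetime, timedelta
      from ftplib import FTP

      ftp = FTP('127.0.0.1')
      ftp.login('admin', 'admin')
      ftp.cwd('test123')

      cutoff = datetime.utcnow() - timedelta(days=7)
      for name in ftp.nlst():                            # names listed by the server itself
          resp = ftp.sendcmd('MDTM ' + name)             # e.g. '213 20240101120000' (UTC)
          mtime = datetime.strptime(resp[4:18], '%Y%m%d%H%M%S')
          if mtime < cutoff:
              ftp.delete(name)
      ftp.quit()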

    Read the article

  • How are files (especially audio files) organized internally?

    - by mystify
    I'm trying to grok this: Apple talks about "packets" in audio files, and there is a fancy function called AudioFileReadPackets which takes a lot of arguments. One of them specifies the "start packet", and another one the number of packets which you want to read. So I imagine an audio file looking like this internally: it's made up of a lot of packets. If it's an audio file in a variable bit rate format, then every packet may have a different size. If the file is in a constant bit rate format, then every packet is the same size. So an audio file is like a truck full of boxes, and every box contains some interesting stuff. Is that correct? Does it apply to any kind of file? Is this what files actually look like?

    Read the article

  • Analyze VS2010 C# projects and report files on disk not part of the projects?

    - by Lasse V. Karlsen
    I discovered earlier tonight that files and folders I have removed from my C# projects are apparently still on disk, even though my Visual Studio Mercurial plugin seems to do a good job of deleting them when I delete them in Visual Studio. It must have hiccuped when it came to these files. So I wondered... Does anyone have a script or similar, or know of something, that will look at my .csproj files and report extra files and folders on my disk that aren't part of the project files? I just want to clean up my repository contents.
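
    A minimal Python sketch of one way to do this, assuming a standard MSBuild-style .csproj where included files appear as Include="..." attributes (Compile, Content, None, etc.); anything on disk that is not referenced gets reported.

      import os
      import xml.etree.ElementTree as ET

      def files_not_in_project(csproj_path):
          project_dir = os.path.dirname(os.path.abspath(csproj_path))
          root = ET.parse(csproj_path).getroot()
          included = set()
          for item in root.iter():
              inc = item.get('Include')
              if inc:
                  full = os.path.join(project_dir, inc.replace('\\', os.sep))
                  included.add(os.path.normcase(os.path.normpath(full)))
          extras = []
          for dirpath, _dirnames, filenames in os.walk(project_dir):
              for name in filenames:
                  full = os.path.normcase(os.path.normpath(os.path.join(dirpath, name)))
                  if full not in included and not name.endswith(('.csproj', '.user', '.suo')):
                      extras.append(full)
          return extras

    Directories such as .hg, bin and obj would need to be skipped as well; that filtering is left out of the sketch.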

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files. How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage; an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file but in most cases there is little point in creating more than one as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let’s see how many VLFs we have in a brand new database. USE master GO CREATE DATABASE VLF_Test ON ( NAME = VLF_Test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf', SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 ) LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB ); go USE VLF_Test go DBCC LOGINFO; The results of this are, firstly, that a new database is created with the specified file sizes, and the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let’s first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB ); With a bigger log file specified we get more VLFs. What if we make it bigger again? LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB ); This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic.
If the file growth is lower than 64MB, then 4 VLFs are created; if the growth is between 64MB and 1GB, then 8 VLFs are created; if the growth is greater than 1GB, then 16 VLFs are created. Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let’s add some data and see what happens. USE vlf_test go -- we need somewhere to put the data so, a table is in order IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table go CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) ) GO -- Let's check the state of the log file -- 4 VLFs found EXECUTE ('DBCC LOGINFO'); go -- We can go ahead and insert some data and then check the state of the log file again INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO -- insert 500 rows and we get 22 VLFs EXECUTE ('DBCC LOGINFO'); go -- Let's insert more rows INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO 10 -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs EXECUTE ('DBCC LOGINFO'); Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice Shrink the log file to its smallest possible size DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) * Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year. ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberley Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above. Knowing when VLFs are being created By complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto grows that take place and let you know about them right away.
Hassle-free monitoring of VLFs If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there. Resources MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/ Disclosure I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • How to find hidden/cloaked files in Windows 2003?

    - by homemdelata
    Here is the point. I set Windows to display all hidden files and protected operating system files, but even after that, my antivirus (Kaspersky) is still detecting a ".dll" file in "c:\windows\system32" and flagging it as riskware 'Hidden.Object'. I tried to find this file every time, but it's not there. So I asked one of the developers to create a service that checks the folder every 5 seconds and, if it finds the file, copies it to another place. If it copies it to another place with the same name and extension, I still can't find the file in the other folder, but Kaspersky now finds both. If I keep the same name but use a different extension, like ".temp123", I still can't find the file. Lastly, I created an empty text file and renamed it with the same name as the other one, and that file was gone too. After all this research it's clear that every file with this same name on this specific server gets cloaked, no matter the file extension. I created a file with this same name on another server and nothing happened; the file is still there without a problem. How can I find this kind of file? How can I "uncloak" it? How can I know what this file is doing?

    Read the article

  • How many files in a directory is too many?

    - by Kip
    Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on a Linux server.) Background: I have a photo album website, and every image uploaded is renamed to an 8-hex-digit id (say, a58f375c.jpg). This is to avoid filename conflicts (if lots of "IMG0001.JPG" files are uploaded, for example). The original filename and any useful metadata are stored in a database. Right now, I have somewhere around 1500 files in the images directory. This makes listing the files in the directory (through an FTP or SSH client) take a few seconds. But I can't see that it has any effect other than that. In particular, there doesn't seem to be any impact on how quickly an image file is served to the user. I've thought about reducing the number of images per directory by making 16 subdirectories: 0-9 and a-f. Then I'd move the images into the subdirectories based on what the first hex digit of the filename was. But I'm not sure that there's any reason to do so except for the occasional listing of the directory through FTP/SSH.
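
    If the subdirectory approach turns out to be worthwhile, here is a minimal Python sketch of the layout described above; the flat directory path is a placeholder, and it assumes every image name starts with a hex digit.

      import os
      import shutil

      images_dir = 'images'   # hypothetical path to the existing flat directory

      for name in os.listdir(images_dir):
          src = os.path.join(images_dir, name)
          if not os.path.isfile(src):
              continue                                          # skip the bucket directories themselves
          bucket = os.path.join(images_dir, name[0].lower())    # '0'-'9' or 'a'-'f'
          os.makedirs(bucket, exist_ok=True)
          shutil.move(src, os.path.join(bucket, name))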

    Read the article

  • What is the usage of Splay Trees in the real world?

    - by Meena
    I decided to learn about balanced search trees, so I picked 2-3-4 and splay trees. What are some examples of splay tree usage in the real world? In this Cornell recitation, http://www.cs.cornell.edu/courses/cs3110/2009fa/recitations/rec-splay.html, I read that 'A good example is a network router'. But from the rest of the explanation it seems like network routers use hash tables and not splay trees, since the lookup time is constant instead of O(log n).

    Read the article

  • error while loading shared libraries, file too short

    - by tommyk
    I got an application from one of my customers. When I try to run it I get the following error: error while loading shared libraries: ./libvtkWidgets.so.5.4: file too short In my project structure I see the following: -rwxrwxrwx 1 tomasz tomasz 20 2011-02-01 10:44 libvtkWidgets.so -rwxrwxrwx 1 tomasz tomasz 22 2011-02-01 10:44 libvtkWidgets.so.5.4 -rwxrwxrwx 1 tomasz tomasz 2147103 2011-02-01 10:44 libvtkWidgets.so.5.4.2 Is my libvtkWidgets shared library corrupted? How do I solve this error?

    Read the article

  • VSDB to SSDT Part 1 : Converting projects and trimming excess files

    - by Etienne Giust
    Visual Studio 2012 introduces a change regarding Database Projects: they now use the SSDT technology, which means old VS2010 database projects (VSDB projects) need to be converted. Hopefully, VS2012 does that for you and it is quite painless, but in my case some unnecessary artifacts from the old project were left in place. Also, when reopening the solution, database projects appeared unconverted even if I had converted them in the previous session and saved the solution. Converting the project(s) When opening your Visual Studio 2010 solution with Visual Studio 2012, every standard project should be converted by default, but Visual Studio will ask you about your database projects: “Functional changes required Visual Studio will automatically make functional changes to the following projects in order to open them. The project behavior will change as a result. You will be able to open these projects in this version and Visual Studio 2010 SP1.” If you accept, your project is converted. And it should compile with no errors right away, except if you have dependencies on dbschema files, which are no longer supported. The output of an SSDT project is a dacpac file which replaces the dbschema file you were accustomed to. References to dacpac files can be added to SSDT projects in the same fashion that references to dbschema files could be added to VSDB projects. Cleaning up You will notice that your project file is now a sqlproj file but the old dbproj is still there. In fact at that point you can still reopen the solution in Visual Studio 2010 and everything should show up. If like me you plan on using VS2012 exclusively, you can get rid of the following files which are still on your disk and in your source control: the dbproj and dbproj.vspscc files Properties/Database.sqlcmdvars Properties/Database.sqldeployment Properties/Database.sqlpermissions Properties/Database.sqlsettings You might wonder where the information which used to be in the Properties files is now stored. Permissions: a Permissions.sql was created at the root level of your project. Note that when you create a new database project and import a database using the Schema Compare capabilities from Visual Studio, imported table and stored procedure definition files will hold the permission information (along with constraints and indexes). SQLVars: they are defined inside the publish.xml files. Deployment: they are also in the publish.xml files. Settings: I was unable to find where those are now. I suppose they are not defined anymore. But Visual Studio still says my database projects should be converted! I had this error upon closing and then re-opening the solution: my database projects would appear unconverted even though I did all the necessary steps previously. Easy solution: remove those projects from the solution and add them again (the sqlproj files). More For those who run into problems when converting from VSDB to SSDT, I suggest reading the following post: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx Also interesting is a side-by-side comparison of VSDB and SSDT project features: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx

    Read the article

  • Automatically analyze excel files

    - by dole doug
    I have to replicate the manual generation of a large number of Excel files. I started to manually track the relations between cells (files, formulas, etc.). I also had a talk with the person who generates those files. For now I have a general understanding of how the Excel files are generated, but the devil is in the details. I assume that I can write a script which will generate the hierarchy between cells and files, but this might require the same effort as manually tracing the relations. Also, I'm afraid that I'm not too experienced, and an automated approach might be more error-prone than a manual analysis. How should I handle this problem? Do you know of an open source project which analyzes Excel files recursively, following the formulas?
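
    As a starting point, here is a minimal Python sketch that pulls every formula out of an .xlsx workbook using the third-party openpyxl package, so the cell-to-cell (and cross-workbook) references can then be traced; the reference regex is only a rough heuristic, not a full Excel formula parser.

      import re
      from openpyxl import load_workbook

      # rough pattern for references like A1, $B$2, Sheet1!C3 or [other.xlsx]Sheet1!D4
      CELL_REF = re.compile(r"(?:\[[^\]]+\])?(?:'?[A-Za-z0-9_ ]+'?!)?\$?[A-Z]{1,3}\$?\d+")

      def formula_map(path):
          wb = load_workbook(path, data_only=False)   # keep formulas, not cached values
          deps = {}
          for ws in wb.worksheets:
              for row in ws.iter_rows():
                  for cell in row:
                      if isinstance(cell.value, str) and cell.value.startswith('='):
                          key = '%s!%s' % (ws.title, cell.coordinate)
                          deps[key] = CELL_REF.findall(cell.value)
          return deps

    Running formula_map over all the workbooks and joining the resulting dictionaries would give a first, approximate dependency graph to refine by hand.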

    Read the article

  • Files lost after reboot in Windows

    - by Vikram
    Hi, I have a dual-boot setup (Ubuntu 10.10 and Windows 7) on my laptop. I recently copied some important files from a pen drive to one of my NTFS drives from Ubuntu. After copying I was able to view the files in the directory, and hence I formatted the pen drive. The problem occurred when I rebooted the system into Windows: I could not find any of the files or folders I had copied to the drive. I rebooted into Ubuntu again and could not find them there either. What is the reason my files got deleted? Is there any way to recover them? Note: my Windows was in a hibernation state when I was copying the files in Ubuntu. Would this affect the files in any manner?

    Read the article

  • Role of linking, object files and executables

    - by Tim
    For a C or assembly program that does not require any other library, is linking necessary? In other words, will conversion from C to assembly and/or from assembly to an object file be enough without being followed by linking? If linking is still needed, what will it do, given that there is just one object file which doesn't need a library to link to? Relatedly, how different are object files and executable files, given that in Linux both have the ELF file format? Are object files those ELF files that are not runnable? Are there some executable files that can be linked to object files? If so, does that mean dynamic linking of executables to shared libraries?

    Read the article

  • How is this number calculated?

    - by Hamid
    I have these numbers: A == 0x20000000 B == 18 C == (B/10) D == 0x20000004 == (A + C) A and D are in hex, but I'm not sure what the assumed numeric bases of the others are (although I'd assume base 10 since they don't explicitly state a base). It may or may not be relevant, but I'm dealing with memory addresses; A and D are pointers. The part I'm failing to understand is how 18/10 gives me 0x4. Edit: Code for clarity: *address1 (pointer is to address: 0x20000000) printf("Test1: %p\n", address1); printf("Test2: %p\n", address1+(18/10)); printf("Test3: %p\n", address1+(21/10)); Output: Test1: 0x20000000 Test2: 0x20000004 Test3: 0x20000008
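
    A minimal Python sketch of the likely arithmetic, assuming 18/10 and 21/10 are C integer divisions and that address1 points to a 4-byte type, so each step of pointer arithmetic advances the address by 4 bytes (both of these are assumptions, not stated in the question):

      base = 0x20000000
      element_size = 4                             # assumed sizeof(*address1)
      offset_a = 18 // 10                          # C integer division -> 1
      offset_b = 21 // 10                          # C integer division -> 2
      print(hex(base + offset_a * element_size))   # 0x20000004, matches Test2
      print(hex(base + offset_b * element_size))   # 0x20000008, matches Test3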

    Read the article

  • What is the usage of Splay Trees in the real world?

    - by Meena
    I decided to learn about balanced search trees, so I picked 2-3-4 and splay trees. I'm wondering what some examples of splay tree usage in the real world are. In this Cornell recitation, http://www.cs.cornell.edu/courses/cs3110/2009fa/recitations/rec-splay.html, I read that 'A good example is a network router'. But from the rest of the explanation it seems like network routers use hash tables and not splay trees, since the lookup time is constant instead of O(log n). Thanks!

    Read the article

  • Copy hangs in local network

    - by umpirsky
    I want to copy files from one Ubuntu system to another. They are both on the local wireless network. I shared the entire home dir on one and tried to copy some files to the other. It works for some time and then just hangs at some point. I use Nautilus to copy the files. Here is an example screenshot; it just hangs like this: After I cancel, the Nautilus icon in the launcher keeps the progress bar, so I guess there is some problem. What could the problem be?

    Read the article

  • How to check last changes in filesystem or directory with bash?

    - by Robert Vila
    After the system unmounted the root partition I detected that some files are missing from the filesystem: the wifi and gwibber icons disappeared from the indicator applet. I want to check whether other files are missing using the ls program and the locate program, which works on an index of a previous state of the filesystem. Thus, locate '/usr/share/icons/*' | xargs ls -d 2>&1 >/dev/null serves that purpose, and I can count the nonexistent files like this: locate '/usr/share/icons/*' | xargs ls -d 2>&1 >/dev/null | wc -l except for the case where filenames have blank spaces in them; and, not very surprisingly, that is the case with Ubuntu (OMG!! It is no longer "forbidden" like in the good old days). If I use: locate '/usr/share/icons/*' | xargs -Iñ ls -d 'ñ' 2>&1 >/dev/null it does not work, because there is some kind of interference in the syntax between the redirection of the standard outputs and the use of the -I parameter. Can anyone please help me with this syntax or suggest another approach?
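
    A minimal Python sketch of the same check, which sidesteps the shell quoting problems with spaces in filenames entirely; it assumes locate is on the PATH and interprets the pattern itself as a glob.

      import os
      import subprocess

      # ask locate for everything it indexed under /usr/share/icons
      output = subprocess.check_output(['locate', '/usr/share/icons/*'])
      paths = output.decode('utf-8', 'replace').splitlines()
      missing = [p for p in paths if not os.path.lexists(p)]
      print(len(missing), 'indexed files no longer exist')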

    Read the article
